Test Report: KVM_Linux_crio 18774

9d63d58ff18723161685b0b8e892cfd1b7c2a23e:2024-04-29:34260

Failed tests (32/311)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 158.71
32 TestAddons/parallel/MetricsServer 337.59
44 TestAddons/StoppedEnableDisable 154.45
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.15
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
163 TestMultiControlPlane/serial/StopSecondaryNode 142.32
165 TestMultiControlPlane/serial/RestartSecondaryNode 62.46
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 408.64
170 TestMultiControlPlane/serial/StopCluster 142.08
230 TestMultiNode/serial/RestartKeepsNodes 310.14
232 TestMultiNode/serial/StopMultiNode 141.53
239 TestPreload 267.66
247 TestKubernetesUpgrade 403.26
282 TestPause/serial/SecondStartNoReconfiguration 62.4
284 TestStartStop/group/old-k8s-version/serial/FirstStart 294.32
291 TestStartStop/group/embed-certs/serial/Stop 139.56
294 TestStartStop/group/no-preload/serial/Stop 139.08
299 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.05
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 88.24
304 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.39
308 TestStartStop/group/old-k8s-version/serial/SecondStart 722.16
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.46
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.49
313 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.49
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.52
315 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 538.28
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 344.69
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 312.39
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 149.27
TestAddons/parallel/Ingress (158.71s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-412183 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-412183 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-412183 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bbcc8ec6-e9cc-473d-8d5e-e5fabf60cc5e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bbcc8ec6-e9cc-473d-8d5e-e5fabf60cc5e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.004497778s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-412183 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.138283759s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-412183 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.105
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-412183 addons disable ingress-dns --alsologtostderr -v=1: (1.327035672s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-412183 addons disable ingress --alsologtostderr -v=1: (7.871867307s)
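Note on the failure above: the step that failed is the in-VM curl at addons_test.go:262. The remote command exited with status 28, curl's "operation timed out" error, meaning the request to 127.0.0.1:80 inside the VM timed out rather than being refused or served. A rough manual re-check (not part of the recorded run, and assuming the addons-412183 profile still exists with the ingress addon re-enabled) could be:

	# verify the ingress-nginx controller is Running/Ready, then repeat the probe with verbose output and a bounded timeout
	kubectl --context addons-412183 -n ingress-nginx get pods -o wide
	kubectl --context addons-412183 get ingress
	out/minikube-linux-amd64 -p addons-412183 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"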
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-412183 -n addons-412183
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-412183 logs -n 25: (1.438128312s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-450771 | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC |                     |
	|         | -p download-only-450771                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| delete  | -p download-only-450771                                                                     | download-only-450771 | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| delete  | -p download-only-513783                                                                     | download-only-513783 | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| delete  | -p download-only-450771                                                                     | download-only-450771 | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-527606 | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC |                     |
	|         | binary-mirror-527606                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33939                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-527606                                                                     | binary-mirror-527606 | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| addons  | disable dashboard -p                                                                        | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC |                     |
	|         | addons-412183                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC |                     |
	|         | addons-412183                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-412183 --wait=true                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:44 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | addons-412183                                                                               |                      |         |         |                     |                     |
	| ip      | addons-412183 ip                                                                            | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	| addons  | addons-412183 addons disable                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-412183 ssh curl -s                                                                   | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-412183 addons disable                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-412183 addons                                                                        | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-412183 ssh cat                                                                       | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | /opt/local-path-provisioner/pvc-44e4f926-cc71-46f4-8659-1c0700bd3215_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-412183 addons disable                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-412183 addons                                                                        | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:45 UTC | 29 Apr 24 18:45 UTC |
	|         | -p addons-412183                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:45 UTC | 29 Apr 24 18:45 UTC |
	|         | addons-412183                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:45 UTC | 29 Apr 24 18:45 UTC |
	|         | -p addons-412183                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-412183 ip                                                                            | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:46 UTC | 29 Apr 24 18:46 UTC |
	| addons  | addons-412183 addons disable                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:46 UTC | 29 Apr 24 18:46 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-412183 addons disable                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:46 UTC | 29 Apr 24 18:46 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 18:40:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 18:40:21.784513   15893 out.go:291] Setting OutFile to fd 1 ...
	I0429 18:40:21.784759   15893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:40:21.784768   15893 out.go:304] Setting ErrFile to fd 2...
	I0429 18:40:21.784773   15893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:40:21.784961   15893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 18:40:21.785683   15893 out.go:298] Setting JSON to false
	I0429 18:40:21.786597   15893 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1320,"bootTime":1714414702,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 18:40:21.786667   15893 start.go:139] virtualization: kvm guest
	I0429 18:40:21.788842   15893 out.go:177] * [addons-412183] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 18:40:21.791385   15893 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 18:40:21.792814   15893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 18:40:21.791429   15893 notify.go:220] Checking for updates...
	I0429 18:40:21.795404   15893 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:40:21.796729   15893 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:40:21.798048   15893 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 18:40:21.799373   15893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 18:40:21.800728   15893 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 18:40:21.831646   15893 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 18:40:21.832742   15893 start.go:297] selected driver: kvm2
	I0429 18:40:21.832755   15893 start.go:901] validating driver "kvm2" against <nil>
	I0429 18:40:21.832766   15893 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 18:40:21.833473   15893 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:40:21.833550   15893 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 18:40:21.847903   15893 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 18:40:21.847958   15893 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 18:40:21.848156   15893 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 18:40:21.848215   15893 cni.go:84] Creating CNI manager for ""
	I0429 18:40:21.848231   15893 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 18:40:21.848238   15893 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 18:40:21.848291   15893 start.go:340] cluster config:
	{Name:addons-412183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-412183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:40:21.848390   15893 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:40:21.850108   15893 out.go:177] * Starting "addons-412183" primary control-plane node in "addons-412183" cluster
	I0429 18:40:21.851536   15893 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 18:40:21.851573   15893 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 18:40:21.851595   15893 cache.go:56] Caching tarball of preloaded images
	I0429 18:40:21.851675   15893 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 18:40:21.851689   15893 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 18:40:21.852090   15893 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/config.json ...
	I0429 18:40:21.852125   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/config.json: {Name:mk0047e96bc96b9616a4f565ad62819443d7eb7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:21.852265   15893 start.go:360] acquireMachinesLock for addons-412183: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 18:40:21.852331   15893 start.go:364] duration metric: took 42.562µs to acquireMachinesLock for "addons-412183"
	I0429 18:40:21.852355   15893 start.go:93] Provisioning new machine with config: &{Name:addons-412183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:addons-412183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 18:40:21.852415   15893 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 18:40:21.854089   15893 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0429 18:40:21.854216   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:40:21.854263   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:40:21.868333   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0429 18:40:21.868840   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:40:21.869365   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:40:21.869388   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:40:21.869696   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:40:21.869879   15893 main.go:141] libmachine: (addons-412183) Calling .GetMachineName
	I0429 18:40:21.870020   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:21.870171   15893 start.go:159] libmachine.API.Create for "addons-412183" (driver="kvm2")
	I0429 18:40:21.870201   15893 client.go:168] LocalClient.Create starting
	I0429 18:40:21.870236   15893 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 18:40:21.936161   15893 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 18:40:22.153075   15893 main.go:141] libmachine: Running pre-create checks...
	I0429 18:40:22.153101   15893 main.go:141] libmachine: (addons-412183) Calling .PreCreateCheck
	I0429 18:40:22.153643   15893 main.go:141] libmachine: (addons-412183) Calling .GetConfigRaw
	I0429 18:40:22.154052   15893 main.go:141] libmachine: Creating machine...
	I0429 18:40:22.154091   15893 main.go:141] libmachine: (addons-412183) Calling .Create
	I0429 18:40:22.154231   15893 main.go:141] libmachine: (addons-412183) Creating KVM machine...
	I0429 18:40:22.155517   15893 main.go:141] libmachine: (addons-412183) DBG | found existing default KVM network
	I0429 18:40:22.156390   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:22.156242   15915 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0429 18:40:22.156478   15893 main.go:141] libmachine: (addons-412183) DBG | created network xml: 
	I0429 18:40:22.156501   15893 main.go:141] libmachine: (addons-412183) DBG | <network>
	I0429 18:40:22.156513   15893 main.go:141] libmachine: (addons-412183) DBG |   <name>mk-addons-412183</name>
	I0429 18:40:22.156523   15893 main.go:141] libmachine: (addons-412183) DBG |   <dns enable='no'/>
	I0429 18:40:22.156532   15893 main.go:141] libmachine: (addons-412183) DBG |   
	I0429 18:40:22.156546   15893 main.go:141] libmachine: (addons-412183) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0429 18:40:22.156558   15893 main.go:141] libmachine: (addons-412183) DBG |     <dhcp>
	I0429 18:40:22.156570   15893 main.go:141] libmachine: (addons-412183) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0429 18:40:22.156580   15893 main.go:141] libmachine: (addons-412183) DBG |     </dhcp>
	I0429 18:40:22.156590   15893 main.go:141] libmachine: (addons-412183) DBG |   </ip>
	I0429 18:40:22.156603   15893 main.go:141] libmachine: (addons-412183) DBG |   
	I0429 18:40:22.156615   15893 main.go:141] libmachine: (addons-412183) DBG | </network>
	I0429 18:40:22.156623   15893 main.go:141] libmachine: (addons-412183) DBG | 
	I0429 18:40:22.161890   15893 main.go:141] libmachine: (addons-412183) DBG | trying to create private KVM network mk-addons-412183 192.168.39.0/24...
	I0429 18:40:22.226644   15893 main.go:141] libmachine: (addons-412183) DBG | private KVM network mk-addons-412183 192.168.39.0/24 created
	I0429 18:40:22.226670   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:22.226630   15915 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:40:22.226695   15893 main.go:141] libmachine: (addons-412183) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183 ...
	I0429 18:40:22.226715   15893 main.go:141] libmachine: (addons-412183) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 18:40:22.226778   15893 main.go:141] libmachine: (addons-412183) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 18:40:22.474365   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:22.474272   15915 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa...
	I0429 18:40:22.848313   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:22.848167   15915 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/addons-412183.rawdisk...
	I0429 18:40:22.848342   15893 main.go:141] libmachine: (addons-412183) DBG | Writing magic tar header
	I0429 18:40:22.848352   15893 main.go:141] libmachine: (addons-412183) DBG | Writing SSH key tar header
	I0429 18:40:22.848360   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:22.848277   15915 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183 ...
	I0429 18:40:22.848370   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183
	I0429 18:40:22.848380   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 18:40:22.848388   15893 main.go:141] libmachine: (addons-412183) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183 (perms=drwx------)
	I0429 18:40:22.848415   15893 main.go:141] libmachine: (addons-412183) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 18:40:22.848423   15893 main.go:141] libmachine: (addons-412183) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 18:40:22.848432   15893 main.go:141] libmachine: (addons-412183) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 18:40:22.848440   15893 main.go:141] libmachine: (addons-412183) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 18:40:22.848447   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:40:22.848455   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 18:40:22.848464   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 18:40:22.848473   15893 main.go:141] libmachine: (addons-412183) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 18:40:22.848479   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home/jenkins
	I0429 18:40:22.848489   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home
	I0429 18:40:22.848495   15893 main.go:141] libmachine: (addons-412183) DBG | Skipping /home - not owner
	I0429 18:40:22.848502   15893 main.go:141] libmachine: (addons-412183) Creating domain...
	I0429 18:40:22.849830   15893 main.go:141] libmachine: (addons-412183) define libvirt domain using xml: 
	I0429 18:40:22.849864   15893 main.go:141] libmachine: (addons-412183) <domain type='kvm'>
	I0429 18:40:22.849875   15893 main.go:141] libmachine: (addons-412183)   <name>addons-412183</name>
	I0429 18:40:22.849893   15893 main.go:141] libmachine: (addons-412183)   <memory unit='MiB'>4000</memory>
	I0429 18:40:22.849904   15893 main.go:141] libmachine: (addons-412183)   <vcpu>2</vcpu>
	I0429 18:40:22.849919   15893 main.go:141] libmachine: (addons-412183)   <features>
	I0429 18:40:22.849931   15893 main.go:141] libmachine: (addons-412183)     <acpi/>
	I0429 18:40:22.849941   15893 main.go:141] libmachine: (addons-412183)     <apic/>
	I0429 18:40:22.849950   15893 main.go:141] libmachine: (addons-412183)     <pae/>
	I0429 18:40:22.849962   15893 main.go:141] libmachine: (addons-412183)     
	I0429 18:40:22.849974   15893 main.go:141] libmachine: (addons-412183)   </features>
	I0429 18:40:22.849993   15893 main.go:141] libmachine: (addons-412183)   <cpu mode='host-passthrough'>
	I0429 18:40:22.850023   15893 main.go:141] libmachine: (addons-412183)   
	I0429 18:40:22.850048   15893 main.go:141] libmachine: (addons-412183)   </cpu>
	I0429 18:40:22.850058   15893 main.go:141] libmachine: (addons-412183)   <os>
	I0429 18:40:22.850088   15893 main.go:141] libmachine: (addons-412183)     <type>hvm</type>
	I0429 18:40:22.850099   15893 main.go:141] libmachine: (addons-412183)     <boot dev='cdrom'/>
	I0429 18:40:22.850110   15893 main.go:141] libmachine: (addons-412183)     <boot dev='hd'/>
	I0429 18:40:22.850120   15893 main.go:141] libmachine: (addons-412183)     <bootmenu enable='no'/>
	I0429 18:40:22.850129   15893 main.go:141] libmachine: (addons-412183)   </os>
	I0429 18:40:22.850138   15893 main.go:141] libmachine: (addons-412183)   <devices>
	I0429 18:40:22.850150   15893 main.go:141] libmachine: (addons-412183)     <disk type='file' device='cdrom'>
	I0429 18:40:22.850176   15893 main.go:141] libmachine: (addons-412183)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/boot2docker.iso'/>
	I0429 18:40:22.850192   15893 main.go:141] libmachine: (addons-412183)       <target dev='hdc' bus='scsi'/>
	I0429 18:40:22.850198   15893 main.go:141] libmachine: (addons-412183)       <readonly/>
	I0429 18:40:22.850205   15893 main.go:141] libmachine: (addons-412183)     </disk>
	I0429 18:40:22.850212   15893 main.go:141] libmachine: (addons-412183)     <disk type='file' device='disk'>
	I0429 18:40:22.850220   15893 main.go:141] libmachine: (addons-412183)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 18:40:22.850228   15893 main.go:141] libmachine: (addons-412183)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/addons-412183.rawdisk'/>
	I0429 18:40:22.850236   15893 main.go:141] libmachine: (addons-412183)       <target dev='hda' bus='virtio'/>
	I0429 18:40:22.850242   15893 main.go:141] libmachine: (addons-412183)     </disk>
	I0429 18:40:22.850249   15893 main.go:141] libmachine: (addons-412183)     <interface type='network'>
	I0429 18:40:22.850255   15893 main.go:141] libmachine: (addons-412183)       <source network='mk-addons-412183'/>
	I0429 18:40:22.850260   15893 main.go:141] libmachine: (addons-412183)       <model type='virtio'/>
	I0429 18:40:22.850266   15893 main.go:141] libmachine: (addons-412183)     </interface>
	I0429 18:40:22.850276   15893 main.go:141] libmachine: (addons-412183)     <interface type='network'>
	I0429 18:40:22.850282   15893 main.go:141] libmachine: (addons-412183)       <source network='default'/>
	I0429 18:40:22.850292   15893 main.go:141] libmachine: (addons-412183)       <model type='virtio'/>
	I0429 18:40:22.850297   15893 main.go:141] libmachine: (addons-412183)     </interface>
	I0429 18:40:22.850304   15893 main.go:141] libmachine: (addons-412183)     <serial type='pty'>
	I0429 18:40:22.850326   15893 main.go:141] libmachine: (addons-412183)       <target port='0'/>
	I0429 18:40:22.850346   15893 main.go:141] libmachine: (addons-412183)     </serial>
	I0429 18:40:22.850361   15893 main.go:141] libmachine: (addons-412183)     <console type='pty'>
	I0429 18:40:22.850374   15893 main.go:141] libmachine: (addons-412183)       <target type='serial' port='0'/>
	I0429 18:40:22.850388   15893 main.go:141] libmachine: (addons-412183)     </console>
	I0429 18:40:22.850398   15893 main.go:141] libmachine: (addons-412183)     <rng model='virtio'>
	I0429 18:40:22.850414   15893 main.go:141] libmachine: (addons-412183)       <backend model='random'>/dev/random</backend>
	I0429 18:40:22.850432   15893 main.go:141] libmachine: (addons-412183)     </rng>
	I0429 18:40:22.850446   15893 main.go:141] libmachine: (addons-412183)     
	I0429 18:40:22.850457   15893 main.go:141] libmachine: (addons-412183)     
	I0429 18:40:22.850468   15893 main.go:141] libmachine: (addons-412183)   </devices>
	I0429 18:40:22.850479   15893 main.go:141] libmachine: (addons-412183) </domain>
	I0429 18:40:22.850493   15893 main.go:141] libmachine: (addons-412183) 
	I0429 18:40:22.856527   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:88:fc:c5 in network default
	I0429 18:40:22.857078   15893 main.go:141] libmachine: (addons-412183) Ensuring networks are active...
	I0429 18:40:22.857098   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:22.857702   15893 main.go:141] libmachine: (addons-412183) Ensuring network default is active
	I0429 18:40:22.857949   15893 main.go:141] libmachine: (addons-412183) Ensuring network mk-addons-412183 is active
	I0429 18:40:22.858418   15893 main.go:141] libmachine: (addons-412183) Getting domain xml...
	I0429 18:40:22.859144   15893 main.go:141] libmachine: (addons-412183) Creating domain...
	I0429 18:40:24.214001   15893 main.go:141] libmachine: (addons-412183) Waiting to get IP...
	I0429 18:40:24.214795   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:24.215152   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:24.215181   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:24.215143   15915 retry.go:31] will retry after 288.194622ms: waiting for machine to come up
	I0429 18:40:24.504738   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:24.505124   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:24.505148   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:24.505089   15915 retry.go:31] will retry after 245.840505ms: waiting for machine to come up
	I0429 18:40:24.752573   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:24.752929   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:24.752958   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:24.752895   15915 retry.go:31] will retry after 484.478167ms: waiting for machine to come up
	I0429 18:40:25.238615   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:25.238999   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:25.239025   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:25.238958   15915 retry.go:31] will retry after 474.929578ms: waiting for machine to come up
	I0429 18:40:25.715549   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:25.715870   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:25.715897   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:25.715820   15915 retry.go:31] will retry after 711.577824ms: waiting for machine to come up
	I0429 18:40:26.428691   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:26.429226   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:26.429257   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:26.429210   15915 retry.go:31] will retry after 704.057958ms: waiting for machine to come up
	I0429 18:40:27.134378   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:27.134698   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:27.134730   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:27.134646   15915 retry.go:31] will retry after 804.442246ms: waiting for machine to come up
	I0429 18:40:27.940759   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:27.941079   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:27.941110   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:27.941036   15915 retry.go:31] will retry after 1.318337249s: waiting for machine to come up
	I0429 18:40:29.261464   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:29.261881   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:29.261903   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:29.261851   15915 retry.go:31] will retry after 1.371381026s: waiting for machine to come up
	I0429 18:40:30.634325   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:30.634655   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:30.634718   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:30.634627   15915 retry.go:31] will retry after 2.146502423s: waiting for machine to come up
	I0429 18:40:32.782976   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:32.783473   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:32.783502   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:32.783429   15915 retry.go:31] will retry after 2.393799937s: waiting for machine to come up
	I0429 18:40:35.180130   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:35.180570   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:35.180618   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:35.180554   15915 retry.go:31] will retry after 3.630272395s: waiting for machine to come up
	I0429 18:40:38.812364   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:38.812741   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:38.812771   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:38.812690   15915 retry.go:31] will retry after 3.982338564s: waiting for machine to come up
	I0429 18:40:42.796447   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:42.796831   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:42.796858   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:42.796769   15915 retry.go:31] will retry after 5.362319181s: waiting for machine to come up
	I0429 18:40:48.160567   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.160897   15893 main.go:141] libmachine: (addons-412183) Found IP for machine: 192.168.39.105
	I0429 18:40:48.160928   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has current primary IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.160942   15893 main.go:141] libmachine: (addons-412183) Reserving static IP address...
	I0429 18:40:48.161249   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find host DHCP lease matching {name: "addons-412183", mac: "52:54:00:ae:0f:aa", ip: "192.168.39.105"} in network mk-addons-412183
	I0429 18:40:48.233647   15893 main.go:141] libmachine: (addons-412183) DBG | Getting to WaitForSSH function...
	I0429 18:40:48.233689   15893 main.go:141] libmachine: (addons-412183) Reserved static IP address: 192.168.39.105
	I0429 18:40:48.233702   15893 main.go:141] libmachine: (addons-412183) Waiting for SSH to be available...
	I0429 18:40:48.236113   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.236470   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.236496   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.236715   15893 main.go:141] libmachine: (addons-412183) DBG | Using SSH client type: external
	I0429 18:40:48.236745   15893 main.go:141] libmachine: (addons-412183) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa (-rw-------)
	I0429 18:40:48.236778   15893 main.go:141] libmachine: (addons-412183) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 18:40:48.236794   15893 main.go:141] libmachine: (addons-412183) DBG | About to run SSH command:
	I0429 18:40:48.236814   15893 main.go:141] libmachine: (addons-412183) DBG | exit 0
	I0429 18:40:48.366530   15893 main.go:141] libmachine: (addons-412183) DBG | SSH cmd err, output: <nil>: 
	I0429 18:40:48.366783   15893 main.go:141] libmachine: (addons-412183) KVM machine creation complete!
	I0429 18:40:48.367148   15893 main.go:141] libmachine: (addons-412183) Calling .GetConfigRaw
	I0429 18:40:48.367724   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:48.367929   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:48.368127   15893 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 18:40:48.368143   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:40:48.369258   15893 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 18:40:48.369272   15893 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 18:40:48.369278   15893 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 18:40:48.369284   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:48.371568   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.371944   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.371985   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.372073   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:48.372239   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.372383   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.372521   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:48.372646   15893 main.go:141] libmachine: Using SSH client type: native
	I0429 18:40:48.372857   15893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0429 18:40:48.372874   15893 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 18:40:48.477953   15893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 18:40:48.477984   15893 main.go:141] libmachine: Detecting the provisioner...
	I0429 18:40:48.477992   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:48.480945   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.481292   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.481324   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.481504   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:48.481712   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.481845   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.482020   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:48.482221   15893 main.go:141] libmachine: Using SSH client type: native
	I0429 18:40:48.482384   15893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0429 18:40:48.482395   15893 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 18:40:48.587366   15893 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 18:40:48.587469   15893 main.go:141] libmachine: found compatible host: buildroot
	I0429 18:40:48.587485   15893 main.go:141] libmachine: Provisioning with buildroot...
	I0429 18:40:48.587500   15893 main.go:141] libmachine: (addons-412183) Calling .GetMachineName
	I0429 18:40:48.587785   15893 buildroot.go:166] provisioning hostname "addons-412183"
	I0429 18:40:48.587809   15893 main.go:141] libmachine: (addons-412183) Calling .GetMachineName
	I0429 18:40:48.588009   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:48.590423   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.590744   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.590770   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.590872   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:48.591061   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.591208   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.591346   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:48.591492   15893 main.go:141] libmachine: Using SSH client type: native
	I0429 18:40:48.591644   15893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0429 18:40:48.591655   15893 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-412183 && echo "addons-412183" | sudo tee /etc/hostname
	I0429 18:40:48.716205   15893 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-412183
	
	I0429 18:40:48.716232   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:48.718545   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.718958   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.718981   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.719162   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:48.719347   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.719493   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.719651   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:48.719824   15893 main.go:141] libmachine: Using SSH client type: native
	I0429 18:40:48.719980   15893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0429 18:40:48.719995   15893 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-412183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-412183/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-412183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 18:40:48.832822   15893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 18:40:48.832848   15893 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 18:40:48.832901   15893 buildroot.go:174] setting up certificates
	I0429 18:40:48.832922   15893 provision.go:84] configureAuth start
	I0429 18:40:48.832934   15893 main.go:141] libmachine: (addons-412183) Calling .GetMachineName
	I0429 18:40:48.833212   15893 main.go:141] libmachine: (addons-412183) Calling .GetIP
	I0429 18:40:48.835702   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.836005   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.836027   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.836205   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:48.838491   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.838833   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.838857   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.839008   15893 provision.go:143] copyHostCerts
	I0429 18:40:48.839075   15893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 18:40:48.839217   15893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 18:40:48.839299   15893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 18:40:48.839382   15893 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.addons-412183 san=[127.0.0.1 192.168.39.105 addons-412183 localhost minikube]
	I0429 18:40:48.904456   15893 provision.go:177] copyRemoteCerts
	I0429 18:40:48.904510   15893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 18:40:48.904531   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:48.907044   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.907353   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.907376   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.907533   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:48.907723   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.907885   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:48.908000   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:40:48.989519   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 18:40:49.017669   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 18:40:49.044958   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 18:40:49.072162   15893 provision.go:87] duration metric: took 239.219762ms to configureAuth
	I0429 18:40:49.072191   15893 buildroot.go:189] setting minikube options for container-runtime
	I0429 18:40:49.072396   15893 config.go:182] Loaded profile config "addons-412183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:40:49.072483   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:49.074946   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.075265   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.075294   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.075482   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:49.075659   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.075820   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.075924   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:49.076070   15893 main.go:141] libmachine: Using SSH client type: native
	I0429 18:40:49.076226   15893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0429 18:40:49.076241   15893 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 18:40:49.350495   15893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 18:40:49.350516   15893 main.go:141] libmachine: Checking connection to Docker...
	I0429 18:40:49.350523   15893 main.go:141] libmachine: (addons-412183) Calling .GetURL
	I0429 18:40:49.351929   15893 main.go:141] libmachine: (addons-412183) DBG | Using libvirt version 6000000
	I0429 18:40:49.354022   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.354364   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.354397   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.354581   15893 main.go:141] libmachine: Docker is up and running!
	I0429 18:40:49.354595   15893 main.go:141] libmachine: Reticulating splines...
	I0429 18:40:49.354602   15893 client.go:171] duration metric: took 27.484392148s to LocalClient.Create
	I0429 18:40:49.354629   15893 start.go:167] duration metric: took 27.48445816s to libmachine.API.Create "addons-412183"
	I0429 18:40:49.354643   15893 start.go:293] postStartSetup for "addons-412183" (driver="kvm2")
	I0429 18:40:49.354655   15893 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 18:40:49.354677   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:49.354886   15893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 18:40:49.354920   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:49.357108   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.357466   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.357494   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.357640   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:49.357805   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.357929   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:49.358036   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:40:49.441961   15893 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 18:40:49.447082   15893 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 18:40:49.447107   15893 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 18:40:49.447182   15893 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 18:40:49.447213   15893 start.go:296] duration metric: took 92.5635ms for postStartSetup
	I0429 18:40:49.447263   15893 main.go:141] libmachine: (addons-412183) Calling .GetConfigRaw
	I0429 18:40:49.447816   15893 main.go:141] libmachine: (addons-412183) Calling .GetIP
	I0429 18:40:49.450194   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.450546   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.450584   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.450749   15893 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/config.json ...
	I0429 18:40:49.450922   15893 start.go:128] duration metric: took 27.598497909s to createHost
	I0429 18:40:49.450948   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:49.453199   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.453566   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.453599   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.453726   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:49.453865   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.453991   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.454116   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:49.454237   15893 main.go:141] libmachine: Using SSH client type: native
	I0429 18:40:49.454427   15893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0429 18:40:49.454439   15893 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 18:40:49.559223   15893 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714416049.547431399
	
	I0429 18:40:49.559246   15893 fix.go:216] guest clock: 1714416049.547431399
	I0429 18:40:49.559253   15893 fix.go:229] Guest: 2024-04-29 18:40:49.547431399 +0000 UTC Remote: 2024-04-29 18:40:49.450933922 +0000 UTC m=+27.711157503 (delta=96.497477ms)
	I0429 18:40:49.559295   15893 fix.go:200] guest clock delta is within tolerance: 96.497477ms
	I0429 18:40:49.559302   15893 start.go:83] releasing machines lock for "addons-412183", held for 27.706957406s
	I0429 18:40:49.559322   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:49.559563   15893 main.go:141] libmachine: (addons-412183) Calling .GetIP
	I0429 18:40:49.561992   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.562344   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.562365   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.562490   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:49.562972   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:49.563111   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:49.563201   15893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 18:40:49.563244   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:49.563287   15893 ssh_runner.go:195] Run: cat /version.json
	I0429 18:40:49.563307   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:49.565464   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.565633   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.565754   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.565777   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.565910   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:49.566023   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.566036   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.566058   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.566194   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:49.566206   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:49.566369   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.566371   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:40:49.566534   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:49.566656   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:40:49.668171   15893 ssh_runner.go:195] Run: systemctl --version
	I0429 18:40:49.674676   15893 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 18:40:49.836391   15893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 18:40:49.843360   15893 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 18:40:49.843428   15893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 18:40:49.862567   15893 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 18:40:49.862593   15893 start.go:494] detecting cgroup driver to use...
	I0429 18:40:49.862648   15893 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 18:40:49.879687   15893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 18:40:49.895108   15893 docker.go:217] disabling cri-docker service (if available) ...
	I0429 18:40:49.895169   15893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 18:40:49.910287   15893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 18:40:49.925348   15893 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 18:40:50.048526   15893 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 18:40:50.195718   15893 docker.go:233] disabling docker service ...
	I0429 18:40:50.195788   15893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 18:40:50.210506   15893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 18:40:50.224510   15893 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 18:40:50.375508   15893 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 18:40:50.503250   15893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 18:40:50.518644   15893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 18:40:50.539610   15893 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 18:40:50.539676   15893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:40:50.551371   15893 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 18:40:50.551438   15893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:40:50.563221   15893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:40:50.574936   15893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:40:50.586668   15893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 18:40:50.599929   15893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:40:50.612816   15893 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:40:50.633021   15893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:40:50.644592   15893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 18:40:50.654522   15893 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 18:40:50.654580   15893 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 18:40:50.669083   15893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 18:40:50.680207   15893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:40:50.826021   15893 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 18:40:50.981738   15893 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 18:40:50.981820   15893 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 18:40:50.987042   15893 start.go:562] Will wait 60s for crictl version
	I0429 18:40:50.987136   15893 ssh_runner.go:195] Run: which crictl
	I0429 18:40:50.991562   15893 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 18:40:51.033836   15893 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 18:40:51.033962   15893 ssh_runner.go:195] Run: crio --version
	I0429 18:40:51.063703   15893 ssh_runner.go:195] Run: crio --version
	I0429 18:40:51.097696   15893 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 18:40:51.099168   15893 main.go:141] libmachine: (addons-412183) Calling .GetIP
	I0429 18:40:51.101786   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:51.102130   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:51.102154   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:51.102336   15893 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 18:40:51.106850   15893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 18:40:51.121445   15893 kubeadm.go:877] updating cluster {Name:addons-412183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:addons-412183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 18:40:51.121569   15893 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 18:40:51.121638   15893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 18:40:51.157154   15893 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 18:40:51.157220   15893 ssh_runner.go:195] Run: which lz4
	I0429 18:40:51.161632   15893 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 18:40:51.166349   15893 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 18:40:51.166379   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 18:40:52.775195   15893 crio.go:462] duration metric: took 1.613587682s to copy over tarball
	I0429 18:40:52.775271   15893 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 18:40:55.453183   15893 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.677878186s)
	I0429 18:40:55.453225   15893 crio.go:469] duration metric: took 2.677995586s to extract the tarball
	I0429 18:40:55.453234   15893 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 18:40:55.493518   15893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 18:40:55.540670   15893 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 18:40:55.540695   15893 cache_images.go:84] Images are preloaded, skipping loading
	I0429 18:40:55.540714   15893 kubeadm.go:928] updating node { 192.168.39.105 8443 v1.30.0 crio true true} ...
	I0429 18:40:55.540856   15893 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-412183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-412183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 18:40:55.540921   15893 ssh_runner.go:195] Run: crio config
	I0429 18:40:55.587090   15893 cni.go:84] Creating CNI manager for ""
	I0429 18:40:55.587116   15893 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 18:40:55.587127   15893 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 18:40:55.587146   15893 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.105 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-412183 NodeName:addons-412183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 18:40:55.587292   15893 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-412183"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 18:40:55.587347   15893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 18:40:55.599220   15893 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 18:40:55.599328   15893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 18:40:55.610609   15893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0429 18:40:55.629922   15893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 18:40:55.649794   15893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0429 18:40:55.668119   15893 ssh_runner.go:195] Run: grep 192.168.39.105	control-plane.minikube.internal$ /etc/hosts
	I0429 18:40:55.672376   15893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 18:40:55.686104   15893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:40:55.819811   15893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 18:40:55.841760   15893 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183 for IP: 192.168.39.105
	I0429 18:40:55.841785   15893 certs.go:194] generating shared ca certs ...
	I0429 18:40:55.841800   15893 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:55.841931   15893 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 18:40:56.018106   15893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt ...
	I0429 18:40:56.018134   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt: {Name:mk1a90f1f1cee68ee2944530d90bce20d77faff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.018281   15893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key ...
	I0429 18:40:56.018291   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key: {Name:mk8c549bc46400cd1867a972d6452fc361e7555c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.018358   15893 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 18:40:56.243415   15893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt ...
	I0429 18:40:56.243446   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt: {Name:mk037f9ed9a0ba0db804d2da948eeaadeb55e807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.243592   15893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key ...
	I0429 18:40:56.243602   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key: {Name:mk9eca9dab20265def7e00d5b3901d053a7e6b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.243670   15893 certs.go:256] generating profile certs ...
	I0429 18:40:56.243729   15893 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.key
	I0429 18:40:56.243743   15893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt with IP's: []
	I0429 18:40:56.427080   15893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt ...
	I0429 18:40:56.427110   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: {Name:mk45d4f3b66b94530d94e119121be0e39708fbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.427258   15893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.key ...
	I0429 18:40:56.427268   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.key: {Name:mkbfbe12272f10cea48b7ddf6c1b1f5fe0611db9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.427332   15893 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.key.7d7f4af1
	I0429 18:40:56.427349   15893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.crt.7d7f4af1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.105]
	I0429 18:40:56.564420   15893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.crt.7d7f4af1 ...
	I0429 18:40:56.564458   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.crt.7d7f4af1: {Name:mkbc4ad6ce5f1f28dc2d8233d39abccb1153c632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.564606   15893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.key.7d7f4af1 ...
	I0429 18:40:56.564619   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.key.7d7f4af1: {Name:mkdffc6c3c88557574c00993aadbb459913af94f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.564691   15893 certs.go:381] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.crt.7d7f4af1 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.crt
	I0429 18:40:56.564757   15893 certs.go:385] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.key.7d7f4af1 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.key
	I0429 18:40:56.564800   15893 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.key
	I0429 18:40:56.564815   15893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.crt with IP's: []
	I0429 18:40:56.694779   15893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.crt ...
	I0429 18:40:56.694808   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.crt: {Name:mkf02c29d4dee44c6646830909239c091b8389a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.694971   15893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.key ...
	I0429 18:40:56.694982   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.key: {Name:mk713a023302a5a8d96afc62463fc93cb9b4c09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.695144   15893 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 18:40:56.695179   15893 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 18:40:56.695211   15893 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 18:40:56.695234   15893 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 18:40:56.695792   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 18:40:56.743184   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 18:40:56.777073   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 18:40:56.808425   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 18:40:56.981611   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 18:40:57.015328   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 18:40:57.043949   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 18:40:57.071639   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 18:40:57.099014   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 18:40:57.126315   15893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 18:40:57.145314   15893 ssh_runner.go:195] Run: openssl version
	I0429 18:40:57.152645   15893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 18:40:57.166077   15893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:40:57.171455   15893 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:40:57.171510   15893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:40:57.177897   15893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 18:40:57.190953   15893 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 18:40:57.196071   15893 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 18:40:57.196131   15893 kubeadm.go:391] StartCluster: {Name:addons-412183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 C
lusterName:addons-412183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:40:57.196217   15893 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 18:40:57.196274   15893 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 18:40:57.248603   15893 cri.go:89] found id: ""
	I0429 18:40:57.248681   15893 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 18:40:57.262452   15893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 18:40:57.275795   15893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 18:40:57.289005   15893 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 18:40:57.289026   15893 kubeadm.go:156] found existing configuration files:
	
	I0429 18:40:57.289067   15893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 18:40:57.301793   15893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 18:40:57.301867   15893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 18:40:57.313045   15893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 18:40:57.325893   15893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 18:40:57.325948   15893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 18:40:57.339064   15893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 18:40:57.351680   15893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 18:40:57.351741   15893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 18:40:57.365076   15893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 18:40:57.375858   15893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 18:40:57.375914   15893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
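	The stale-config check above condenses to the following pattern (a sketch of minikube's cleanup logic, not a command copied from the log): each kubeconfig under /etc/kubernetes is kept only if it already points at the expected API endpoint, otherwise it is removed before kubeadm init runs.

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done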
	I0429 18:40:57.392796   15893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 18:40:57.476884   15893 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 18:40:57.477006   15893 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 18:40:57.606677   15893 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 18:40:57.606831   15893 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 18:40:57.606954   15893 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 18:40:57.840308   15893 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 18:40:57.842925   15893 out.go:204]   - Generating certificates and keys ...
	I0429 18:40:57.843032   15893 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 18:40:57.843094   15893 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 18:40:57.896000   15893 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 18:40:57.960496   15893 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 18:40:58.086864   15893 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 18:40:58.268463   15893 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 18:40:58.422194   15893 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 18:40:58.422522   15893 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-412183 localhost] and IPs [192.168.39.105 127.0.0.1 ::1]
	I0429 18:40:58.719479   15893 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 18:40:58.719688   15893 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-412183 localhost] and IPs [192.168.39.105 127.0.0.1 ::1]
	I0429 18:40:58.965382   15893 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 18:40:59.500473   15893 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 18:40:59.714871   15893 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 18:40:59.715115   15893 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 18:40:59.789974   15893 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 18:41:00.127269   15893 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 18:41:00.336120   15893 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 18:41:00.510010   15893 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 18:41:00.731591   15893 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 18:41:00.732177   15893 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 18:41:00.734508   15893 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 18:41:00.736683   15893 out.go:204]   - Booting up control plane ...
	I0429 18:41:00.736765   15893 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 18:41:00.736873   15893 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 18:41:00.736967   15893 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 18:41:00.752765   15893 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 18:41:00.753753   15893 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 18:41:00.753884   15893 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 18:41:00.883211   15893 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 18:41:00.883299   15893 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 18:41:01.882675   15893 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001253216s
	I0429 18:41:01.882792   15893 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 18:41:06.883520   15893 kubeadm.go:309] [api-check] The API server is healthy after 5.001576531s
	I0429 18:41:06.896477   15893 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 18:41:06.915477   15893 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 18:41:06.945331   15893 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 18:41:06.945540   15893 kubeadm.go:309] [mark-control-plane] Marking the node addons-412183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 18:41:06.963141   15893 kubeadm.go:309] [bootstrap-token] Using token: tncb7l.y1ni0jeig8r3do1i
	I0429 18:41:06.964664   15893 out.go:204]   - Configuring RBAC rules ...
	I0429 18:41:06.964804   15893 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 18:41:06.970908   15893 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 18:41:06.982087   15893 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 18:41:06.985568   15893 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 18:41:06.988922   15893 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 18:41:06.992743   15893 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 18:41:07.290804   15893 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 18:41:07.746972   15893 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 18:41:08.290334   15893 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 18:41:08.291207   15893 kubeadm.go:309] 
	I0429 18:41:08.291298   15893 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 18:41:08.291318   15893 kubeadm.go:309] 
	I0429 18:41:08.291414   15893 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 18:41:08.291423   15893 kubeadm.go:309] 
	I0429 18:41:08.291460   15893 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 18:41:08.291523   15893 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 18:41:08.291604   15893 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 18:41:08.291619   15893 kubeadm.go:309] 
	I0429 18:41:08.291681   15893 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 18:41:08.291695   15893 kubeadm.go:309] 
	I0429 18:41:08.291770   15893 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 18:41:08.291782   15893 kubeadm.go:309] 
	I0429 18:41:08.291861   15893 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 18:41:08.291970   15893 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 18:41:08.292069   15893 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 18:41:08.292077   15893 kubeadm.go:309] 
	I0429 18:41:08.292191   15893 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 18:41:08.292320   15893 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 18:41:08.292339   15893 kubeadm.go:309] 
	I0429 18:41:08.292451   15893 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token tncb7l.y1ni0jeig8r3do1i \
	I0429 18:41:08.292613   15893 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 18:41:08.292647   15893 kubeadm.go:309] 	--control-plane 
	I0429 18:41:08.292662   15893 kubeadm.go:309] 
	I0429 18:41:08.292799   15893 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 18:41:08.292812   15893 kubeadm.go:309] 
	I0429 18:41:08.292916   15893 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token tncb7l.y1ni0jeig8r3do1i \
	I0429 18:41:08.293051   15893 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
	I0429 18:41:08.293448   15893 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
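	For reference, the --discovery-token-ca-cert-hash value printed in the join commands above can be recomputed on the control-plane node with the standard openssl recipe (the CA path below is the certificateDir reported earlier in this log; prefix the resulting hex with "sha256:" when passing it to kubeadm join):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'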
	I0429 18:41:08.293479   15893 cni.go:84] Creating CNI manager for ""
	I0429 18:41:08.293488   15893 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 18:41:08.296216   15893 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 18:41:08.297502   15893 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 18:41:08.319993   15893 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
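	The 496-byte /etc/cni/net.d/1-k8s.conflist written here is a plain bridge CNI configuration. Its exact contents are not printed in the log; a typical bridge-plus-portmap conflist of this kind looks roughly like the following sketch (field values are illustrative, not taken from the file):

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }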
	I0429 18:41:08.343987   15893 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 18:41:08.344112   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:08.344137   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-412183 minikube.k8s.io/updated_at=2024_04_29T18_41_08_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=addons-412183 minikube.k8s.io/primary=true
	I0429 18:41:08.377561   15893 ops.go:34] apiserver oom_adj: -16
	I0429 18:41:08.556962   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:09.057203   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:09.557617   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:10.057573   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:10.557307   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:11.057396   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:11.557156   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:12.057282   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:12.557300   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:13.057782   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:13.558021   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:14.057394   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:14.557767   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:15.057547   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:15.557067   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:16.057919   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:16.557626   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:17.057892   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:17.557030   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:18.057967   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:18.557612   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:19.057819   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:19.557033   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:20.057140   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:20.557148   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:21.057435   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:21.557542   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:21.721313   15893 kubeadm.go:1107] duration metric: took 13.377266125s to wait for elevateKubeSystemPrivileges
	W0429 18:41:21.721349   15893 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 18:41:21.721357   15893 kubeadm.go:393] duration metric: took 24.525231154s to StartCluster
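	The repeated "kubectl get sa default" runs between 18:41:08 and 18:41:21 implement a readiness poll: minikube retries until the default service account exists, i.e. until the service-account controller behind the new control plane is serving, and the ~13.4s elevateKubeSystemPrivileges duration above is dominated by that wait. A condensed sketch of the same loop (the kubectl command is taken from the log; the until/sleep wrapper is illustrative):

	    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done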
	I0429 18:41:21.721373   15893 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:41:21.721494   15893 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:41:21.721842   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:41:21.722024   15893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 18:41:21.722040   15893 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
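	The toEnable map above is the addon set requested for this test profile. The same addons can also be toggled individually on an existing profile with the standard addons subcommand (not taken from this log), e.g.:

	    minikube -p addons-412183 addons enable ingress
	    minikube -p addons-412183 addons enable metrics-server
	    minikube -p addons-412183 addons list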
	I0429 18:41:21.722023   15893 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 18:41:21.723947   15893 out.go:177] * Verifying Kubernetes components...
	I0429 18:41:21.722154   15893 addons.go:69] Setting yakd=true in profile "addons-412183"
	I0429 18:41:21.722161   15893 addons.go:69] Setting cloud-spanner=true in profile "addons-412183"
	I0429 18:41:21.722166   15893 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-412183"
	I0429 18:41:21.722170   15893 addons.go:69] Setting default-storageclass=true in profile "addons-412183"
	I0429 18:41:21.722174   15893 addons.go:69] Setting gcp-auth=true in profile "addons-412183"
	I0429 18:41:21.722186   15893 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-412183"
	I0429 18:41:21.722194   15893 addons.go:69] Setting registry=true in profile "addons-412183"
	I0429 18:41:21.722202   15893 addons.go:69] Setting storage-provisioner=true in profile "addons-412183"
	I0429 18:41:21.722210   15893 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-412183"
	I0429 18:41:21.722217   15893 addons.go:69] Setting volumesnapshots=true in profile "addons-412183"
	I0429 18:41:21.722211   15893 addons.go:69] Setting metrics-server=true in profile "addons-412183"
	I0429 18:41:21.722209   15893 addons.go:69] Setting ingress=true in profile "addons-412183"
	I0429 18:41:21.722224   15893 addons.go:69] Setting ingress-dns=true in profile "addons-412183"
	I0429 18:41:21.722217   15893 addons.go:69] Setting helm-tiller=true in profile "addons-412183"
	I0429 18:41:21.722229   15893 addons.go:69] Setting inspektor-gadget=true in profile "addons-412183"
	I0429 18:41:21.722241   15893 config.go:182] Loaded profile config "addons-412183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:41:21.725236   15893 mustload.go:65] Loading cluster: addons-412183
	I0429 18:41:21.725255   15893 addons.go:234] Setting addon registry=true in "addons-412183"
	I0429 18:41:21.725266   15893 addons.go:234] Setting addon volumesnapshots=true in "addons-412183"
	I0429 18:41:21.725271   15893 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-412183"
	I0429 18:41:21.725273   15893 addons.go:234] Setting addon yakd=true in "addons-412183"
	I0429 18:41:21.725293   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725295   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725295   15893 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-412183"
	I0429 18:41:21.725311   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725310   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725469   15893 config.go:182] Loaded profile config "addons-412183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:41:21.725563   15893 addons.go:234] Setting addon ingress-dns=true in "addons-412183"
	I0429 18:41:21.725607   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725768   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725778   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725802   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.725813   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725818   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725832   15893 addons.go:234] Setting addon metrics-server=true in "addons-412183"
	I0429 18:41:21.725847   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.725861   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725867   15893 addons.go:234] Setting addon inspektor-gadget=true in "addons-412183"
	I0429 18:41:21.725890   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725940   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725973   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726130   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.726148   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726169   15893 addons.go:234] Setting addon storage-provisioner=true in "addons-412183"
	I0429 18:41:21.726199   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.726224   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726247   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725232   15893 addons.go:234] Setting addon cloud-spanner=true in "addons-412183"
	I0429 18:41:21.726277   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.726279   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726505   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.726526   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726597   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725241   15893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:41:21.726622   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726612   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726225   15893 addons.go:234] Setting addon ingress=true in "addons-412183"
	I0429 18:41:21.726787   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.726988   15893 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-412183"
	I0429 18:41:21.725805   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.727138   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.727155   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726202   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.730509   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.730552   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.725850   15893 addons.go:234] Setting addon helm-tiller=true in "addons-412183"
	I0429 18:41:21.734386   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725279   15893 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-412183"
	I0429 18:41:21.734571   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.734748   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.734774   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.734923   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.734953   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.747372   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I0429 18:41:21.747890   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.748245   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34061
	I0429 18:41:21.748416   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0429 18:41:21.748470   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0429 18:41:21.748662   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.748847   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.749077   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.749092   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.749111   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.749127   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.749161   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.749326   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.749348   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.750191   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.750211   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.750270   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.750330   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.750342   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.750827   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.750866   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.751139   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.751161   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.751169   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.758536   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.758578   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.758663   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.758696   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.758939   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.758957   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.762910   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0429 18:41:21.763501   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.764071   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41933
	I0429 18:41:21.764377   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.764391   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.764752   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.765309   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.765344   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.770161   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.770231   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37191
	I0429 18:41:21.770633   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.770982   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.770999   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.771118   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.771128   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.771458   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.772042   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.772078   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.777703   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I0429 18:41:21.777717   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.777705   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41083
	I0429 18:41:21.778209   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.778453   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.778488   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.778734   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.778750   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.779115   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.779171   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.779367   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.783796   15893 addons.go:234] Setting addon default-storageclass=true in "addons-412183"
	I0429 18:41:21.783839   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.784190   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.784239   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.786363   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.787904   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.788262   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.790388   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45077
	I0429 18:41:21.790506   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.790592   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.790872   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.791497   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.791513   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.791857   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.792379   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.792415   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.800584   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35409
	I0429 18:41:21.801196   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.801927   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.801950   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.803023   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.805600   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.807505   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.809412   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0429 18:41:21.810712   15893 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0429 18:41:21.810730   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0429 18:41:21.810752   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.810847   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0429 18:41:21.810926   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I0429 18:41:21.811343   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.811345   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.811869   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.811889   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.812031   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.812043   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.812413   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.812630   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.813650   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.814537   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.814579   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.818366   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.818375   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0429 18:41:21.818404   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.818369   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.818427   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.818445   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.818798   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.818841   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.818908   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.819174   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.819344   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.819357   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.819798   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.819822   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.819967   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.820263   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42807
	I0429 18:41:21.820639   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.820682   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.820767   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42779
	I0429 18:41:21.821161   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.821227   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.821694   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.821713   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.821999   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34967
	I0429 18:41:21.822183   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.822725   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.822770   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.823047   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.823258   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0429 18:41:21.823615   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.823631   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.824003   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.824213   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41701
	I0429 18:41:21.824405   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.824743   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.824764   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.825078   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.825620   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.825637   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.825693   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.826002   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.826113   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.828294   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0429 18:41:21.828339   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32771
	I0429 18:41:21.826874   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.827008   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.826557   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.829460   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0429 18:41:21.829675   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.829688   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0429 18:41:21.831041   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0429 18:41:21.830121   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.830241   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.830345   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.830431   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.831527   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.833757   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0429 18:41:21.832960   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.833228   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.833740   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0429 18:41:21.834609   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.834994   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.836104   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0429 18:41:21.837455   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0429 18:41:21.838601   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0429 18:41:21.837456   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0429 18:41:21.837486   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.837202   15893 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0429 18:41:21.837515   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.837188   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.837866   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.842857   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0429 18:41:21.843878   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0429 18:41:21.843892   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0429 18:41:21.843913   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.840207   15893 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-412183"
	I0429 18:41:21.843993   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.844382   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.844421   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.844471   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0429 18:41:21.844585   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46283
	I0429 18:41:21.844589   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.844608   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.844674   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.844717   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.845130   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.845184   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.845221   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.845734   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.846305   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.846344   15893 out.go:177]   - Using image docker.io/registry:2.8.3
	I0429 18:41:21.847434   15893 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0429 18:41:21.847403   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.845668   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.846636   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.847133   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.847422   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45779
	I0429 18:41:21.848209   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.848498   15893 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 18:41:21.849541   15893 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 18:41:21.851047   15893 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 18:41:21.851064   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0429 18:41:21.851082   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.849601   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.851145   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.851168   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.848580   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.848635   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.849227   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.851224   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.852885   15893 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0429 18:41:21.848524   15893 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0429 18:41:21.854386   15893 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0429 18:41:21.854401   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0429 18:41:21.850490   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.854420   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.850680   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.851582   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.852077   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.852476   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.852912   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0429 18:41:21.854517   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.850153   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.855179   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.855214   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.855273   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.856477   15893 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0429 18:41:21.855536   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.855567   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.856517   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.855689   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.856237   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.856614   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.857934   15893 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 18:41:21.857952   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0429 18:41:21.857966   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.856750   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.856907   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.857090   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.857545   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.857687   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.859236   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.859252   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.859334   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.859376   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.860206   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40347
	I0429 18:41:21.861708   15893 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0429 18:41:21.860417   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.860666   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.860710   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.860863   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.861157   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.861633   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.861685   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.862295   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45773
	I0429 18:41:21.862708   15893 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0429 18:41:21.863717   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0429 18:41:21.863747   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.863345   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I0429 18:41:21.863781   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.863696   15893 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0429 18:41:21.866259   15893 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 18:41:21.866272   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0429 18:41:21.866285   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.863504   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.866328   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.866347   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.863928   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.866359   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.863951   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.865352   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.865377   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.866400   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.867749   15893 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0429 18:41:21.865440   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.865446   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.865603   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.865655   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.866115   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.866980   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.867014   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.868978   15893 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 18:41:21.868991   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 18:41:21.869005   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.870272   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.870292   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.870272   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.871770   15893 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0429 18:41:21.870411   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.870412   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.870434   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.870431   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.870764   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.870790   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.871693   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.871870   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.872264   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0429 18:41:21.872540   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.873062   15893 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0429 18:41:21.873074   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0429 18:41:21.872871   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.873089   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.873119   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.873137   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.873159   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.873171   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.873180   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.873203   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.873703   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.873739   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.873812   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.873821   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.875229   15893 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.16
	I0429 18:41:21.873712   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.874594   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.875260   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.876738   15893 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0429 18:41:21.874616   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.876755   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0429 18:41:21.876771   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.874637   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.874789   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.875155   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.876857   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.876880   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.875461   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.875904   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.875953   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.877076   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.877117   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.877162   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.877347   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.877377   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.877539   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.877706   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.877756   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.878177   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.878198   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.878376   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.878597   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.878808   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.878946   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.879385   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.879596   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.879714   15893 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 18:41:21.879724   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 18:41:21.879738   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.879824   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37665
	I0429 18:41:21.881382   15893 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 18:41:21.880170   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.881203   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.881846   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.882782   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.882881   15893 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 18:41:21.882890   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 18:41:21.882900   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.882927   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.882948   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.883051   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.883220   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.883245   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.883267   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.883442   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.883624   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.883676   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.883771   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.883790   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.883878   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.884012   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.884239   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.884747   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.884771   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.885843   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.886171   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.886201   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.886285   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.886444   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.886586   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.886702   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.913521   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37089
	I0429 18:41:21.913880   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.914317   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.914343   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.914644   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.914824   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.916181   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.918233   15893 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0429 18:41:21.919669   15893 out.go:177]   - Using image docker.io/busybox:stable
	I0429 18:41:21.921026   15893 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 18:41:21.921048   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0429 18:41:21.921071   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.924047   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.924435   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.924466   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.924706   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.924892   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.925043   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.925218   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	W0429 18:41:21.932807   15893 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59772->192.168.39.105:22: read: connection reset by peer
	I0429 18:41:21.932841   15893 retry.go:31] will retry after 310.484288ms: ssh: handshake failed: read tcp 192.168.39.1:59772->192.168.39.105:22: read: connection reset by peer
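The two lines above show sshutil's dial failure being handed to retry.go, which simply schedules another attempt roughly 310ms later. Below is a minimal stand-alone sketch of that retry-with-delay pattern; dialSSH, the attempt limit, and the starting delay are illustrative assumptions (only the address comes from the log), not minikube's actual sshutil/retry API.

    package main

    import (
        "fmt"
        "log"
        "net"
        "time"
    )

    // dialSSH is a stand-in for the real SSH handshake; it only checks that the
    // TCP port is reachable, which is enough to illustrate the retry loop.
    func dialSSH(addr string) error {
        conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
        if err != nil {
            return err
        }
        return conn.Close()
    }

    func main() {
        addr := "192.168.39.105:22" // address taken from the log above
        var err error
        for attempt, delay := 1, 300*time.Millisecond; attempt <= 5; attempt++ {
            if err = dialSSH(addr); err == nil {
                log.Printf("connected on attempt %d", attempt)
                return
            }
            log.Printf("will retry after %v: %v", delay, err)
            time.Sleep(delay)
            delay *= 2 // back off a little between attempts
        }
        log.Fatal(fmt.Errorf("giving up: %w", err))
    }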
	I0429 18:41:22.125012   15893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 18:41:22.125238   15893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 18:41:22.149904   15893 node_ready.go:35] waiting up to 6m0s for node "addons-412183" to be "Ready" ...
	I0429 18:41:22.153523   15893 node_ready.go:49] node "addons-412183" has status "Ready":"True"
	I0429 18:41:22.153543   15893 node_ready.go:38] duration metric: took 3.606629ms for node "addons-412183" to be "Ready" ...
	I0429 18:41:22.153551   15893 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 18:41:22.160646   15893 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace to be "Ready" ...
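The node_ready/pod_ready lines above poll the node and the system-critical pods until their Ready condition is True (the coredns pod is still not Ready several lines further down). A small client-go sketch of that kind of readiness poll follows; the namespace, label selector, and 6-minute budget are taken from the log, while the polling helper itself and the kubeconfig path are assumptions for illustration.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        for {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
                metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
            if err != nil {
                log.Fatal(err)
            }
            ready := len(pods.Items) > 0
            for i := range pods.Items {
                if !podReady(&pods.Items[i]) {
                    ready = false
                }
            }
            if ready {
                fmt.Println("coredns is Ready")
                return
            }
            select {
            case <-ctx.Done():
                log.Fatal("timed out waiting for coredns")
            case <-time.After(2 * time.Second):
            }
        }
    }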
	I0429 18:41:22.235331   15893 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0429 18:41:22.235351   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0429 18:41:22.272339   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0429 18:41:22.273946   15893 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 18:41:22.273968   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0429 18:41:22.323899   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0429 18:41:22.323924   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0429 18:41:22.323922   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 18:41:22.328194   15893 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0429 18:41:22.328215   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0429 18:41:22.331910   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 18:41:22.334742   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 18:41:22.337692   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 18:41:22.357069   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 18:41:22.380840   15893 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0429 18:41:22.380862   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0429 18:41:22.394920   15893 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0429 18:41:22.394936   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0429 18:41:22.425132   15893 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0429 18:41:22.425154   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0429 18:41:22.428657   15893 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0429 18:41:22.428672   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0429 18:41:22.448111   15893 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 18:41:22.448130   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 18:41:22.491871   15893 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0429 18:41:22.491892   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0429 18:41:22.530852   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0429 18:41:22.530879   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0429 18:41:22.583204   15893 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 18:41:22.583238   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0429 18:41:22.632671   15893 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 18:41:22.632691   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 18:41:22.654837   15893 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0429 18:41:22.654860   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0429 18:41:22.660951   15893 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0429 18:41:22.660965   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0429 18:41:22.688738   15893 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0429 18:41:22.688763   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0429 18:41:22.731691   15893 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0429 18:41:22.731714   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0429 18:41:22.739466   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0429 18:41:22.739490   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0429 18:41:22.761664   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 18:41:22.834889   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 18:41:22.881900   15893 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0429 18:41:22.881934   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0429 18:41:22.885734   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0429 18:41:22.889297   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 18:41:22.923371   15893 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0429 18:41:22.923406   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0429 18:41:22.968770   15893 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0429 18:41:22.968794   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0429 18:41:22.976719   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0429 18:41:22.976738   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0429 18:41:23.044928   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0429 18:41:23.109976   15893 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0429 18:41:23.110011   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0429 18:41:23.258918   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0429 18:41:23.258940   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0429 18:41:23.292363   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0429 18:41:23.292407   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0429 18:41:23.409090   15893 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0429 18:41:23.409117   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0429 18:41:23.578598   15893 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0429 18:41:23.578621   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0429 18:41:23.631783   15893 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 18:41:23.631802   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0429 18:41:23.663329   15893 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 18:41:23.663353   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0429 18:41:23.848788   15893 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.723515667s)
	I0429 18:41:23.848816   15893 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
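The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the "forward . /etc/resolv.conf" line and a "log" directive ahead of "errors", which is how host.minikube.internal becomes resolvable from inside the cluster. Derived from that sed expression, the patched Corefile fragment should look roughly like the excerpt below (other directives elided).

    .:53 {
        errors
        log
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf ...
        ...
    }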
	I0429 18:41:23.871083   15893 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0429 18:41:23.871104   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0429 18:41:24.043826   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 18:41:24.052737   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 18:41:24.077668   15893 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0429 18:41:24.077696   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0429 18:41:24.175836   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:24.354341   15893 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-412183" context rescaled to 1 replicas
	I0429 18:41:24.494637   15893 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0429 18:41:24.494665   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0429 18:41:24.699452   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.427070065s)
	I0429 18:41:24.699505   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:24.699518   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:24.699840   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:24.699865   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:24.699876   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:24.699889   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:24.699897   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:24.700162   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:24.700168   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:24.700191   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:24.808157   15893 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 18:41:24.808181   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0429 18:41:25.054848   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 18:41:26.194291   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:28.247646   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:28.897992   15893 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0429 18:41:28.898029   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:28.901259   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:28.901723   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:28.901754   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:28.901954   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:28.902165   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:28.902322   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:28.902463   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:29.424833   15893 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0429 18:41:29.737797   15893 addons.go:234] Setting addon gcp-auth=true in "addons-412183"
	I0429 18:41:29.737919   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:29.738277   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:29.738311   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:29.755386   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I0429 18:41:29.755871   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:29.756385   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:29.756413   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:29.756732   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:29.757181   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:29.757207   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:29.773273   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0429 18:41:29.773733   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:29.774245   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:29.774276   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:29.774608   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:29.774766   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:29.776406   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:29.776620   15893 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0429 18:41:29.776641   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:29.779418   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:29.779771   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:29.779802   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:29.779981   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:29.780168   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:29.780296   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:29.780454   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:30.378080   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:31.852557   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.528599414s)
	I0429 18:41:31.852627   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.852645   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.852620   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.520679632s)
	I0429 18:41:31.852706   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.852714   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.852738   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.515029952s)
	I0429 18:41:31.853054   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.963734001s)
	I0429 18:41:31.853072   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853076   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853084   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.853089   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.852711   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.51794703s)
	I0429 18:41:31.853152   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853160   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.853163   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.80819231s)
	I0429 18:41:31.852819   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.495714452s)
	I0429 18:41:31.853191   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853199   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853202   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.853206   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.852839   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.853241   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.853250   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853257   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.852890   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.091200902s)
	I0429 18:41:31.853295   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853302   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.853347   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.809484017s)
	W0429 18:41:31.853377   15893 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 18:41:31.853399   15893 retry.go:31] will retry after 298.609511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
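The failure above is an ordering problem: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch as the CRD that defines its kind, and the API server has not registered that kind yet, hence "no matches for kind ... ensure CRDs are installed first". The log shows minikube simply retrying (and later re-running the batch with --force). Below is a stand-alone sketch of that retry-on-apply behavior, shelling out the same way ssh_runner does; retryApply and the attempt/delay values are illustrative assumptions, while the kubectl path, kubeconfig, and manifests come from the log.

    package main

    import (
        "log"
        "os/exec"
        "strings"
        "time"
    )

    // retryApply re-runs `kubectl apply` when the error looks like a CRD that has
    // not been registered yet ("no matches for kind"), mirroring the retry seen in
    // the log above. It is an illustrative helper, not minikube's addons code.
    func retryApply(kubectl, kubeconfig string, files []string, attempts int, delay time.Duration) error {
        args := append([]string{"--kubeconfig=" + kubeconfig, "apply"}, files...)
        var out []byte
        var err error
        for i := 0; i < attempts; i++ {
            out, err = exec.Command(kubectl, args...).CombinedOutput()
            if err == nil {
                return nil
            }
            if !strings.Contains(string(out), "no matches for kind") {
                return err // a different failure; do not retry
            }
            log.Printf("apply failed (CRD not ready yet), retrying in %v", delay)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        files := []string{
            "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
        }
        err := retryApply("/var/lib/minikube/binaries/v1.30.0/kubectl",
            "/var/lib/minikube/kubeconfig", files, 3, 300*time.Millisecond)
        if err != nil {
            log.Fatal(err)
        }
    }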
	I0429 18:41:31.852942   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.018027685s)
	I0429 18:41:31.853466   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.852983   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.967224213s)
	I0429 18:41:31.853474   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.800670046s)
	I0429 18:41:31.853493   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853497   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853502   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.853509   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.852990   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.853520   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.853529   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.852994   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.853017   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.853476   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.853536   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858097   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858107   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858118   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858131   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858136   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858136   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858147   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858154   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858162   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858172   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858176   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858179   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858183   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858188   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858211   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858215   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858225   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858233   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858235   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858240   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858245   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858254   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858267   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858172   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858275   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858283   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858288   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858293   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858311   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858332   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858150   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858345   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858350   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858355   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858360   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858246   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858272   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858369   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858283   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858140   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858360   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858387   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858377   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858398   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858400   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858407   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858416   15893 addons.go:470] Verifying addon ingress=true in "addons-412183"
	I0429 18:41:31.858469   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.861951   15893 out.go:177] * Verifying ingress addon...
	I0429 18:41:31.858388   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858497   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858555   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858605   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858391   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858634   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858683   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858822   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858832   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858854   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858872   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858885   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858923   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.859269   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.859293   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.863597   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.863602   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.863628   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.863662   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.863712   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.863722   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.863630   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.863741   15893 addons.go:470] Verifying addon registry=true in "addons-412183"
	I0429 18:41:31.863714   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.863639   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.865661   15893 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-412183 service yakd-dashboard -n yakd-dashboard
	
	I0429 18:41:31.863664   15893 addons.go:470] Verifying addon metrics-server=true in "addons-412183"
	I0429 18:41:31.864051   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.864071   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.864071   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.864118   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.864524   15893 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0429 18:41:31.868446   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.868463   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.868478   15893 out.go:177] * Verifying registry addon...
	I0429 18:41:31.870416   15893 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0429 18:41:31.905112   15893 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0429 18:41:31.905155   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:31.914142   15893 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 18:41:31.914169   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:31.926881   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.926907   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.927186   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.927204   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.927209   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	W0429 18:41:31.927300   15893 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
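The warning above is an ordinary optimistic-concurrency conflict: the "local-path" StorageClass changed between the read and the update, so the write was rejected. Purely as an illustration (not minikube's own code), the usual remedy is to re-read the object and retry on conflict; a minimal client-go sketch, assuming a kubeconfig at the default location and the local-path StorageClass named in the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Assumes ~/.kube/config points at the cluster; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Re-read and update on every attempt so a concurrent modification
	// (the "object has been modified" error above) is retried with the
	// latest resourceVersion instead of failing outright.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		// Standard annotation that marks a StorageClass as the cluster default.
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		fmt.Println("could not mark local-path as default:", err)
		return
	}
	fmt.Println("local-path marked as default StorageClass")
}
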
	I0429 18:41:31.943415   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.943434   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.943821   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.943833   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.943846   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:32.152366   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 18:41:32.386127   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:32.415620   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:32.593435   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.538531416s)
	I0429 18:41:32.593487   15893 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.816849105s)
	I0429 18:41:32.595121   15893 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 18:41:32.593487   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:32.596375   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:32.597495   15893 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0429 18:41:32.596671   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:32.598632   15893 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0429 18:41:32.598646   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0429 18:41:32.597524   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:32.598726   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:32.598742   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:32.596702   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:32.599081   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:32.599096   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:32.599119   15893 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-412183"
	I0429 18:41:32.600387   15893 out.go:177] * Verifying csi-hostpath-driver addon...
	I0429 18:41:32.602386   15893 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0429 18:41:32.619768   15893 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 18:41:32.619790   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:32.689963   15893 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0429 18:41:32.689983   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0429 18:41:32.703078   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:32.785280   15893 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 18:41:32.785302   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0429 18:41:32.815514   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 18:41:32.884880   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:32.885158   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:33.116203   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:33.378314   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:33.381800   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:33.609846   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:33.879052   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:33.879561   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:34.108343   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:34.386862   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:34.387006   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:34.596574   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.444160633s)
	I0429 18:41:34.596628   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:34.596641   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:34.596899   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:34.596919   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:34.596929   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:34.596936   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:34.597159   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:34.597204   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:34.597211   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:34.616987   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:34.809005   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.993454332s)
	I0429 18:41:34.809051   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:34.809063   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:34.809388   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:34.809405   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:34.809427   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:34.809480   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:34.809505   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:34.809771   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:34.809823   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:34.809792   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:34.811630   15893 addons.go:470] Verifying addon gcp-auth=true in "addons-412183"
	I0429 18:41:34.813722   15893 out.go:177] * Verifying gcp-auth addon...
	I0429 18:41:34.815639   15893 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0429 18:41:34.845113   15893 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0429 18:41:34.845139   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:34.888590   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:34.910594   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:35.116310   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:35.174427   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:35.325794   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:35.379249   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:35.379710   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:35.610944   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:35.819973   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:35.872912   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:35.875481   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:36.108705   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:36.319861   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:36.372606   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:36.376367   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:36.609295   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:36.822386   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:36.872563   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:36.876084   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:37.109466   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:37.320651   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:37.373401   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:37.374670   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:37.608612   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:37.667400   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:37.820198   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:37.874462   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:37.876983   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:38.108403   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:38.320108   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:38.373200   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:38.377576   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:38.608033   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:38.830396   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:38.882595   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:38.882754   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:39.108334   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:39.320238   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:39.374251   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:39.376285   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:39.628752   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:39.667870   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:39.822009   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:39.872673   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:39.876399   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:40.111373   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:40.329710   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:40.378545   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:40.383862   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:40.608516   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:40.820460   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:40.873263   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:40.875885   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:41.109185   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:41.319765   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:41.373535   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:41.376955   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:41.608485   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:41.819723   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:41.872984   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:41.876324   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:42.108383   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:42.168269   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:42.320186   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:42.374619   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:42.376550   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:42.608886   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:42.819924   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:42.872500   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:42.875796   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:43.109248   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:43.321505   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:43.374291   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:43.375664   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:43.608160   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:43.819734   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:43.873499   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:43.876243   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:44.137939   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:44.168380   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:44.320880   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:44.373324   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:44.375225   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:44.610217   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:44.819958   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:44.874598   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:44.877704   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:45.110330   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:45.320461   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:45.376075   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:45.382396   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:45.609287   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:45.819998   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:45.873620   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:45.875945   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:46.110654   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:46.173378   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:46.320147   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:46.374740   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:46.377708   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:46.609821   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:46.819360   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:46.873810   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:46.880959   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:47.109093   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:47.320414   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:47.374961   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:47.376267   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:47.609385   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:47.819465   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:47.875396   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:47.879909   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:48.109395   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:48.320437   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:48.712989   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:48.715484   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:48.716631   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:48.717716   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:48.831230   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:48.873144   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:48.876107   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:49.108972   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:49.320297   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:49.373287   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:49.376098   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:49.608911   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:49.819032   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:49.883145   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:49.883211   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:50.108364   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:50.319823   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:50.372610   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:50.375650   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:50.611469   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:51.201081   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:51.201610   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:51.205536   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:51.205574   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:51.206852   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:51.319588   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:51.377338   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:51.377395   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:51.608971   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:51.819757   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:51.873780   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:51.875290   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:52.113784   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:52.319874   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:52.373889   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:52.375669   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:52.609126   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:52.819744   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:52.873148   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:52.874636   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:53.108698   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:53.320209   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:53.373557   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:53.376251   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:53.608927   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:53.673375   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:53.819885   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:53.873418   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:53.876802   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:54.108482   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:54.320192   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:54.373321   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:54.376620   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:54.609693   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:54.820157   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:54.873978   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:54.876724   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:55.108595   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:55.319170   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:55.375499   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:55.377357   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:55.611225   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:55.820071   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:55.874336   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:55.887670   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:56.108718   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:56.170812   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:56.319124   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:56.373706   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:56.375954   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:56.609652   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:56.819905   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:56.873029   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:56.877833   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:57.109005   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:57.320153   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:57.384517   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:57.387771   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:57.608593   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:57.819563   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:57.874690   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:57.875182   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:58.109084   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:58.320431   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:58.374450   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:58.377976   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:58.608597   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:58.680754   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:58.820591   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:58.875721   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:58.876602   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:59.109625   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:59.330331   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:59.374858   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:59.376332   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:59.609368   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:59.820137   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:59.874020   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:59.875668   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:00.111412   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:00.321186   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:00.373492   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:00.380546   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:00.614160   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:00.820131   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:00.878827   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:00.879626   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:01.109100   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:01.168450   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:42:01.320124   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:01.374477   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:01.377514   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:01.613638   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:01.819614   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:01.873882   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:01.877285   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:02.108378   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:02.319134   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:02.373307   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:02.376965   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:02.608587   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:02.820396   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:02.877832   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:02.880875   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:03.108458   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:03.319320   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:03.374011   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:03.376558   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:04.063012   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:04.068729   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:04.078163   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:04.093598   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:04.097202   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:42:04.110507   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:04.172134   15893 pod_ready.go:92] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"True"
	I0429 18:42:04.172166   15893 pod_ready.go:81] duration metric: took 42.011497187s for pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.172180   15893 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hx6q4" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.184492   15893 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-hx6q4" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-hx6q4" not found
	I0429 18:42:04.184518   15893 pod_ready.go:81] duration metric: took 12.331113ms for pod "coredns-7db6d8ff4d-hx6q4" in "kube-system" namespace to be "Ready" ...
	E0429 18:42:04.184528   15893 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-hx6q4" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-hx6q4" not found
	I0429 18:42:04.184536   15893 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.192453   15893 pod_ready.go:92] pod "etcd-addons-412183" in "kube-system" namespace has status "Ready":"True"
	I0429 18:42:04.192479   15893 pod_ready.go:81] duration metric: took 7.936712ms for pod "etcd-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.192488   15893 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.201460   15893 pod_ready.go:92] pod "kube-apiserver-addons-412183" in "kube-system" namespace has status "Ready":"True"
	I0429 18:42:04.201490   15893 pod_ready.go:81] duration metric: took 8.993998ms for pod "kube-apiserver-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.201502   15893 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.221988   15893 pod_ready.go:92] pod "kube-controller-manager-addons-412183" in "kube-system" namespace has status "Ready":"True"
	I0429 18:42:04.222011   15893 pod_ready.go:81] duration metric: took 20.501343ms for pod "kube-controller-manager-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.222021   15893 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xsvwz" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.319805   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:04.373446   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:04.376704   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:04.481317   15893 pod_ready.go:92] pod "kube-proxy-xsvwz" in "kube-system" namespace has status "Ready":"True"
	I0429 18:42:04.481346   15893 pod_ready.go:81] duration metric: took 259.317996ms for pod "kube-proxy-xsvwz" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.481361   15893 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.611975   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:04.820115   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:04.874235   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:04.876474   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:04.881534   15893 pod_ready.go:92] pod "kube-scheduler-addons-412183" in "kube-system" namespace has status "Ready":"True"
	I0429 18:42:04.881560   15893 pod_ready.go:81] duration metric: took 400.191017ms for pod "kube-scheduler-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.881572   15893 pod_ready.go:38] duration metric: took 42.728010442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
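The pod_ready entries above poll each system pod until its Ready condition reports True (or the 6m0s budget runs out). As an illustrative sketch of that kind of readiness poll, not minikube's actual pod_ready.go, and assuming a kubeconfig at the default location plus the coredns pod name taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll one kube-system pod until Ready or until the deadline expires,
	// mirroring the repeated "has status Ready:False" lines in the log.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-2xt85", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
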
	I0429 18:42:04.881596   15893 api_server.go:52] waiting for apiserver process to appear ...
	I0429 18:42:04.881659   15893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 18:42:04.918303   15893 api_server.go:72] duration metric: took 43.196146755s to wait for apiserver process to appear ...
	I0429 18:42:04.918332   15893 api_server.go:88] waiting for apiserver healthz status ...
	I0429 18:42:04.918363   15893 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0429 18:42:04.922691   15893 api_server.go:279] https://192.168.39.105:8443/healthz returned 200:
	ok
	I0429 18:42:04.923645   15893 api_server.go:141] control plane version: v1.30.0
	I0429 18:42:04.923670   15893 api_server.go:131] duration metric: took 5.331478ms to wait for apiserver health ...
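The healthz probe logged above can be reproduced independently; a minimal Go sketch, assuming the apiserver address from the log (https://192.168.39.105:8443) and skipping TLS verification because the cluster CA is not loaded here. A healthy apiserver answers 200 with the body "ok", matching the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skip certificate verification: the minikube cluster CA is not loaded here.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	// Address taken from the log above; adjust for your own cluster.
	resp, err := client.Get("https://192.168.39.105:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
}
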
	I0429 18:42:04.923680   15893 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 18:42:05.088592   15893 system_pods.go:59] 18 kube-system pods found
	I0429 18:42:05.088629   15893 system_pods.go:61] "coredns-7db6d8ff4d-2xt85" [ff070716-6e1d-4ac4-96c7-fa6eb4105594] Running
	I0429 18:42:05.088638   15893 system_pods.go:61] "csi-hostpath-attacher-0" [55526fb3-ae23-4b9e-a7e0-4a8b11e45754] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0429 18:42:05.088644   15893 system_pods.go:61] "csi-hostpath-resizer-0" [489ad110-3b06-480c-96f2-91d6b34e7be8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0429 18:42:05.088651   15893 system_pods.go:61] "csi-hostpathplugin-hgrqx" [2fc787b6-d8f6-4a9d-b816-98ddc0f65eab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0429 18:42:05.088656   15893 system_pods.go:61] "etcd-addons-412183" [8bc479ae-8648-452e-8244-8940efb5b98e] Running
	I0429 18:42:05.088662   15893 system_pods.go:61] "kube-apiserver-addons-412183" [6af7dd3d-3217-488e-96e5-d2597f1eb0e9] Running
	I0429 18:42:05.088665   15893 system_pods.go:61] "kube-controller-manager-addons-412183" [14d64bbb-9a33-4024-8064-8fbb67abc597] Running
	I0429 18:42:05.088669   15893 system_pods.go:61] "kube-ingress-dns-minikube" [3ea4da73-e176-41ea-be8d-a33571308b0c] Running
	I0429 18:42:05.088672   15893 system_pods.go:61] "kube-proxy-xsvwz" [c22033d6-3278-412b-8d58-ae73835285fd] Running
	I0429 18:42:05.088678   15893 system_pods.go:61] "kube-scheduler-addons-412183" [f032228f-858a-4f5a-a47c-9b8cd62a0593] Running
	I0429 18:42:05.088683   15893 system_pods.go:61] "metrics-server-c59844bb4-xbdnx" [0d97597b-550d-4b86-850f-8b839281a545] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 18:42:05.088693   15893 system_pods.go:61] "nvidia-device-plugin-daemonset-bdlx2" [ae8e59a0-c1bc-4229-a163-f1999243d24f] Running
	I0429 18:42:05.088699   15893 system_pods.go:61] "registry-proxy-fvvc6" [8835c731-1707-4dca-9621-b9f326ad0cd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0429 18:42:05.088704   15893 system_pods.go:61] "registry-vkwz2" [cbb1f320-7afd-403e-96b8-4e34ed9b2d78] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0429 18:42:05.088714   15893 system_pods.go:61] "snapshot-controller-745499f584-gmgpd" [d4da05e7-824f-4178-91fc-a8d9d9f5e065] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 18:42:05.088721   15893 system_pods.go:61] "snapshot-controller-745499f584-wfndt" [fc88fee2-c59d-4e4f-a33c-347f6c34fcbb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 18:42:05.088728   15893 system_pods.go:61] "storage-provisioner" [b4e8e367-62f5-4063-8cd9-523506a10609] Running
	I0429 18:42:05.088733   15893 system_pods.go:61] "tiller-deploy-6677d64bcd-424j5" [d9343705-996d-40f7-9597-aba3801d8af1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0429 18:42:05.088741   15893 system_pods.go:74] duration metric: took 165.050346ms to wait for pod list to return data ...
	I0429 18:42:05.088749   15893 default_sa.go:34] waiting for default service account to be created ...
	I0429 18:42:05.108801   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:05.281951   15893 default_sa.go:45] found service account: "default"
	I0429 18:42:05.281985   15893 default_sa.go:55] duration metric: took 193.227143ms for default service account to be created ...
	I0429 18:42:05.282001   15893 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 18:42:05.321336   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:05.376405   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:05.378396   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:05.488579   15893 system_pods.go:86] 18 kube-system pods found
	I0429 18:42:05.488610   15893 system_pods.go:89] "coredns-7db6d8ff4d-2xt85" [ff070716-6e1d-4ac4-96c7-fa6eb4105594] Running
	I0429 18:42:05.488618   15893 system_pods.go:89] "csi-hostpath-attacher-0" [55526fb3-ae23-4b9e-a7e0-4a8b11e45754] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0429 18:42:05.488625   15893 system_pods.go:89] "csi-hostpath-resizer-0" [489ad110-3b06-480c-96f2-91d6b34e7be8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0429 18:42:05.488632   15893 system_pods.go:89] "csi-hostpathplugin-hgrqx" [2fc787b6-d8f6-4a9d-b816-98ddc0f65eab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0429 18:42:05.488639   15893 system_pods.go:89] "etcd-addons-412183" [8bc479ae-8648-452e-8244-8940efb5b98e] Running
	I0429 18:42:05.488645   15893 system_pods.go:89] "kube-apiserver-addons-412183" [6af7dd3d-3217-488e-96e5-d2597f1eb0e9] Running
	I0429 18:42:05.488652   15893 system_pods.go:89] "kube-controller-manager-addons-412183" [14d64bbb-9a33-4024-8064-8fbb67abc597] Running
	I0429 18:42:05.488659   15893 system_pods.go:89] "kube-ingress-dns-minikube" [3ea4da73-e176-41ea-be8d-a33571308b0c] Running
	I0429 18:42:05.488670   15893 system_pods.go:89] "kube-proxy-xsvwz" [c22033d6-3278-412b-8d58-ae73835285fd] Running
	I0429 18:42:05.488676   15893 system_pods.go:89] "kube-scheduler-addons-412183" [f032228f-858a-4f5a-a47c-9b8cd62a0593] Running
	I0429 18:42:05.488690   15893 system_pods.go:89] "metrics-server-c59844bb4-xbdnx" [0d97597b-550d-4b86-850f-8b839281a545] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 18:42:05.488697   15893 system_pods.go:89] "nvidia-device-plugin-daemonset-bdlx2" [ae8e59a0-c1bc-4229-a163-f1999243d24f] Running
	I0429 18:42:05.488703   15893 system_pods.go:89] "registry-proxy-fvvc6" [8835c731-1707-4dca-9621-b9f326ad0cd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0429 18:42:05.488711   15893 system_pods.go:89] "registry-vkwz2" [cbb1f320-7afd-403e-96b8-4e34ed9b2d78] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0429 18:42:05.488717   15893 system_pods.go:89] "snapshot-controller-745499f584-gmgpd" [d4da05e7-824f-4178-91fc-a8d9d9f5e065] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 18:42:05.488723   15893 system_pods.go:89] "snapshot-controller-745499f584-wfndt" [fc88fee2-c59d-4e4f-a33c-347f6c34fcbb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 18:42:05.488727   15893 system_pods.go:89] "storage-provisioner" [b4e8e367-62f5-4063-8cd9-523506a10609] Running
	I0429 18:42:05.488733   15893 system_pods.go:89] "tiller-deploy-6677d64bcd-424j5" [d9343705-996d-40f7-9597-aba3801d8af1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0429 18:42:05.488740   15893 system_pods.go:126] duration metric: took 206.730841ms to wait for k8s-apps to be running ...
	I0429 18:42:05.488747   15893 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 18:42:05.488799   15893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 18:42:05.531801   15893 system_svc.go:56] duration metric: took 43.04686ms WaitForService to wait for kubelet
	I0429 18:42:05.531843   15893 kubeadm.go:576] duration metric: took 43.809691823s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 18:42:05.531869   15893 node_conditions.go:102] verifying NodePressure condition ...
	I0429 18:42:05.610186   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:05.683860   15893 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 18:42:05.683890   15893 node_conditions.go:123] node cpu capacity is 2
	I0429 18:42:05.683902   15893 node_conditions.go:105] duration metric: took 152.029356ms to run NodePressure ...
	I0429 18:42:05.683914   15893 start.go:240] waiting for startup goroutines ...
	I0429 18:42:05.820619   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:05.875650   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:05.875970   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:06.108856   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:06.321187   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:06.374049   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:06.377237   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:06.615999   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:06.820236   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:06.873988   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:06.876131   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:07.108456   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:07.321011   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:07.373736   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:07.376004   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:07.608641   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:07.819572   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:07.873988   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:07.874790   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:08.113966   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:08.320232   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:08.374410   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:08.379774   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:08.608282   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:08.820036   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:08.873448   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:08.876877   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:09.108499   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:09.320986   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:09.376651   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:09.378992   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:09.609695   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:09.827120   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:09.873491   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:09.879377   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:10.109552   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:10.320707   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:10.373516   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:10.375920   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:10.609434   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:10.821201   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:10.874310   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:10.877710   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:11.110178   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:11.320692   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:11.373412   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:11.378017   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:11.608894   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:11.819844   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:11.874570   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:11.876158   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:12.109961   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:12.320457   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:12.376047   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:12.376684   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:12.611929   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:12.819738   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:12.879350   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:12.883361   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:13.109916   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:13.319233   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:13.380085   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:13.382373   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:13.609784   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:13.820310   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:13.879895   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:13.881343   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:14.109710   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:14.320062   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:14.374056   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:14.375213   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:14.614982   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:14.820562   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:14.879273   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:14.879453   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:15.108951   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:15.326827   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:15.379677   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:15.380083   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:15.610635   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:15.820916   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:15.873979   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:15.878653   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:16.109114   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:16.320650   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:16.373764   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:16.375391   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:16.609808   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:16.820018   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:16.873310   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:16.876372   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:17.109032   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:17.320272   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:17.377505   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:17.378702   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:17.611550   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:17.819699   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:17.880584   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:17.880895   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:18.109239   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:18.323426   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:18.373828   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:18.376453   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:18.613136   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:18.820698   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:18.873344   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:18.879195   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:19.109839   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:19.320098   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:19.373927   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:19.378920   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:19.610004   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:19.820126   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:19.879219   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:19.883046   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:20.109318   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:20.319710   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:20.372647   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:20.376390   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:20.610209   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:21.059879   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:21.061861   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:21.064271   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:21.109434   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:21.320502   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:21.376257   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:21.377651   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:21.609980   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:21.819798   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:21.874184   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:21.878380   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:22.109303   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:22.319667   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:22.373303   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:22.375720   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:22.610607   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:22.819749   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:22.877238   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:22.877439   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:23.110671   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:23.320153   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:23.373551   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:23.376052   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:23.609176   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:23.822500   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:23.878698   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:23.879077   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:24.109071   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:24.320369   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:24.373413   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:24.376775   15893 kapi.go:107] duration metric: took 52.506357023s to wait for kubernetes.io/minikube-addons=registry ...
	I0429 18:42:24.609432   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:24.820180   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:24.874945   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:25.122211   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:25.320237   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:25.374370   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:25.609525   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:25.820344   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:25.874218   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:26.111568   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:26.320003   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:26.373253   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:26.610615   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:26.819944   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:26.874490   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:27.108643   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:27.320232   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:27.373969   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:27.609764   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:27.820535   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:27.875572   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:28.110508   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:28.320137   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:28.373592   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:28.609621   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:28.820797   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:28.874709   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:29.116073   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:29.498339   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:29.499468   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:29.609216   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:29.819885   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:29.872933   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:30.108448   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:30.319844   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:30.373004   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:30.610569   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:30.819214   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:30.873868   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:31.108600   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:31.320264   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:31.373519   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:31.778520   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:31.820133   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:31.873766   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:32.110403   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:32.319923   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:32.374027   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:32.609685   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:32.823506   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:32.875338   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:33.111249   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:33.319314   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:33.374435   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:33.609148   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:33.822750   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:33.874056   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:34.111432   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:34.318855   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:34.372734   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:34.610570   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:34.820089   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:34.873342   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:35.108972   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:35.320253   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:35.373137   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:35.609786   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:35.819792   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:35.875787   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:36.117115   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:36.326164   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:36.376089   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:36.610014   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:36.820407   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:36.874011   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:37.120166   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:37.320500   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:37.373315   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:37.608359   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:37.819557   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:37.875384   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:38.110662   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:38.323731   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:38.373208   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:38.609197   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:38.820787   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:38.873867   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:39.115772   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:39.319494   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:39.374752   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:39.609111   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:39.820207   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:39.873989   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:40.107814   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:40.319942   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:40.373468   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:40.620227   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:40.819905   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:40.874850   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:41.110768   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:41.319429   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:41.373871   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:41.609730   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:41.820835   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:41.880857   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:42.109040   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:42.319592   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:42.375517   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:42.609605   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:42.819188   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:42.874197   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:43.111361   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:43.319058   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:43.376510   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:43.609183   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:43.820823   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:43.883253   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:44.108606   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:44.320270   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:44.382397   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:44.614692   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:44.820344   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:44.883416   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:45.108892   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:45.319041   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:45.375167   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:45.609745   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:45.820541   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:45.874576   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:46.109109   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:46.320787   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:46.374696   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:46.610511   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:46.822449   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:46.875865   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:47.108868   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:47.320464   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:47.374646   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:47.625984   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:47.820037   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:47.874355   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:48.108926   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:48.319709   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:48.374214   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:48.610743   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:48.820625   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:48.874899   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:49.527470   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:49.527528   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:49.529401   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:49.607985   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:49.819750   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:49.876537   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:50.116799   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:50.319118   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:50.373160   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:50.608237   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:50.820588   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:50.874827   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:51.112295   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:51.321371   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:51.377896   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:51.607997   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:51.819291   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:51.875630   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:52.109047   15893 kapi.go:107] duration metric: took 1m19.506656648s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
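(Editor's note on the repeated kapi.go:96 lines above: minikube tracks each addon by a pod label selector and re-lists the matching pods at a short interval until they leave Pending; when they do, kapi.go:107 records the total wait, as seen for kubernetes.io/minikube-addons=registry at 52.5s and kubernetes.io/minikube-addons=csi-hostpath-driver at 1m19.5s. The following is only a minimal sketch of a comparable label-selector wait using client-go; it is not minikube's actual kapi.go code, and the namespace, selector, poll interval, and timeout are illustrative assumptions.)

// sketch: poll pods matching a label selector until none are Pending
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPods re-lists pods matching selector until every one of them
// reports Running, mirroring the repeated "waiting for pod ... Pending" lines
// in the log above. Interval and timeout are caller-chosen.
func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled for this selector yet
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	// Load the default kubeconfig; paths and the 6-minute timeout below are assumptions.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabeledPods(context.Background(), cs,
		"kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
}

(End of note; the original log continues below.)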
	I0429 18:42:52.320150   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:52.376103   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:52.821968   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:52.875586   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:53.320455   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:53.374295   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:53.820083   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:53.876342   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:54.321485   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:54.376558   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:54.820417   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:54.875369   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:55.319743   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:55.373135   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:55.819911   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:55.873258   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:56.319772   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:56.373531   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:56.819910   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:56.874423   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:57.319059   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:57.373559   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:57.819961   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:57.874481   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:58.319912   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:58.374001   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:58.819175   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:58.875865   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:59.319206   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:59.374595   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:59.820104   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:59.874755   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:00.319877   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:00.373337   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:00.819582   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:00.875737   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:01.319802   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:01.373031   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:01.819995   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:01.874533   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:02.319260   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:02.373559   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:02.820593   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:02.873851   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:03.319967   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:03.372906   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:03.819082   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:03.878013   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:04.319785   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:04.372707   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:04.820457   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:04.875576   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:05.320043   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:05.373304   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:05.820389   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:05.874968   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:06.319544   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:06.378267   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:06.820628   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:06.873196   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:07.319880   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:07.373113   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:07.819329   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:07.875388   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:08.320573   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:08.373855   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:08.820239   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:08.876090   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:09.319297   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:09.373151   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:09.819637   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:09.874599   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:10.320795   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:10.373131   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:10.819797   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:10.874120   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:11.320656   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:11.374527   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:11.819552   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:11.873767   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:12.319673   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:12.374450   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:12.819904   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:12.873764   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:13.320406   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:13.373778   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:13.820170   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:13.876108   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:14.320520   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:14.374287   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:14.819579   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:14.873493   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:15.320041   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:15.373468   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:15.819939   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:15.876553   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:16.319766   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:16.373308   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:16.820281   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:16.873375   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:17.320850   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:17.373372   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:17.819659   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:17.875307   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:18.321259   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:18.373735   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:18.820405   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:18.874455   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:19.320394   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:19.374212   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:19.819691   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:19.874410   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:20.319316   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:20.374332   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:20.819649   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:20.875092   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:21.320257   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:21.373704   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:21.819803   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:21.875422   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:22.318737   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:22.373045   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:22.819990   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:22.873873   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:23.320145   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:23.373578   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:23.821355   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:23.875818   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:24.321083   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:24.374178   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:24.819215   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:24.875319   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:25.319686   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:25.376024   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:25.819477   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:25.873687   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:26.320543   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:26.374494   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:26.820507   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:26.873407   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:27.323027   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:27.373559   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:27.825718   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:27.874676   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:28.320483   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:28.373696   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:28.820231   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:28.873452   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:29.323207   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:29.373796   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:29.820809   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:29.872751   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:30.320418   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:30.374226   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:30.820193   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:30.874685   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:31.320601   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:31.373914   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:31.820225   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:31.876561   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:32.319638   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:32.375030   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:32.819689   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:32.874627   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:33.319722   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:33.373593   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:33.821274   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:33.873990   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:34.320049   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:34.373046   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:34.819007   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:34.883671   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:35.319871   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:35.373202   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:35.819642   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:35.883366   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:36.324986   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:36.373733   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:36.819605   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:36.873038   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:37.320989   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:37.373455   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:37.819659   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:37.874293   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:38.319416   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:38.373926   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:38.819249   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:38.875393   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:39.320403   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:39.373828   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:39.820073   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:39.873655   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:40.321369   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:40.373639   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:40.819869   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:40.874906   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:41.320417   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:41.374235   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:41.820466   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:41.873696   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:42.319974   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:42.373252   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:42.819928   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:42.873777   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:43.320389   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:43.374188   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:43.819476   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:43.874141   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:44.319462   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:44.374207   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:44.819251   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:44.878598   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:45.319713   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:45.374367   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:45.819515   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:45.876972   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:46.319711   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:46.373544   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:46.820227   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:46.875753   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:47.320069   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:47.373262   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:47.819415   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:47.873611   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:48.319899   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:48.373258   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:48.819697   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:48.875877   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:49.319715   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:49.373138   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:49.819754   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:49.872750   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:50.320174   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:50.375535   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:50.823862   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:50.878775   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:51.322016   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:51.374380   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:51.820918   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:51.876605   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:52.320850   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:52.376116   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:52.820457   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:52.874567   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:53.319610   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:53.376072   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:53.819362   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:53.873783   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:54.320847   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:54.787695   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:54.822470   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:54.875157   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:55.319503   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:55.373812   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:55.820655   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:55.878154   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:56.322038   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:56.372849   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:56.818785   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:56.873434   15893 kapi.go:107] duration metric: took 2m25.008908375s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0429 18:43:57.320310   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:57.820784   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:58.319912   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:58.821453   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:59.321002   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:59.822238   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:44:00.321091   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:44:00.819883   15893 kapi.go:107] duration metric: took 2m26.004241214s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0429 18:44:00.821713   15893 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-412183 cluster.
	I0429 18:44:00.823120   15893 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0429 18:44:00.824635   15893 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0429 18:44:00.826134   15893 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, helm-tiller, inspektor-gadget, yakd, metrics-server, storage-provisioner, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0429 18:44:00.827542   15893 addons.go:505] duration metric: took 2m39.105496165s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns helm-tiller inspektor-gadget yakd metrics-server storage-provisioner default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0429 18:44:00.827601   15893 start.go:245] waiting for cluster config update ...
	I0429 18:44:00.827623   15893 start.go:254] writing updated cluster config ...
	I0429 18:44:00.828011   15893 ssh_runner.go:195] Run: rm -f paused
	I0429 18:44:00.882205   15893 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 18:44:00.883921   15893 out.go:177] * Done! kubectl is now configured to use "addons-412183" cluster and "default" namespace by default
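	(Editor's note: the repeated "kapi.go:96] waiting for pod ..." lines above are minikube polling the cluster for pods matching a label selector until one reports Running, then logging the total wait as a "duration metric". The sketch below is an illustrative reconstruction of that polling pattern using client-go; it is not minikube's actual kapi.go code, and the namespace, selector, cadence, and timeout values are assumptions chosen to mirror the log.)

	// wait_for_pod.go - minimal sketch of a label-selector wait loop (assumed values, not minikube source)
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodRunning polls pods matching selector in ns until one is Running or the timeout expires.
	func waitForPodRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for time.Since(start) < timeout {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						// analogous to the kapi.go:107 "duration metric" line in the log above
						log.Printf("took %s to wait for %s", time.Since(start), selector)
						return nil
					}
				}
			}
			// analogous to the repeated kapi.go:96 "waiting for pod" lines
			log.Printf("waiting for pod %q, current state: Pending", selector)
			time.Sleep(500 * time.Millisecond) // roughly the ~0.5s cadence visible in the timestamps
		}
		return fmt.Errorf("timed out waiting for %s", selector)
	}

	func main() {
		// Assumes a reachable kubeconfig at the default location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		if err := waitForPodRunning(context.Background(), cs, "ingress-nginx",
			"app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("ingress-nginx controller is Running")
	}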
	
	
	==> CRI-O <==
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.036383695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714416411036356612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579668,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe3524be-6de7-4bed-9b87-ccdf7a14e195 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.037209227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e25bab2d-d0ac-436b-82b1-44ac105f4b8d name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.037266671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e25bab2d-d0ac-436b-82b1-44ac105f4b8d name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.037708529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc8fc0b63ef314b42836b03a887a711953f39d3b92053f68e6fc31c7a287c7b3,PodSandboxId:51d4271a95c5c93f043ffd53993f99a35cd05847e00cf95346b86eb88cd06cb1,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714416404687965737,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-58mmg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28ba31f3-909c-45c3-ba1f-bb5679486b41,},Annotations:map[string]string{io.kubernetes.container.hash: edfe22b7,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2d302338a160a0e2c150527899fa208a7976b4eaa8335b15399f4e981686bb,PodSandboxId:139f0e46199808b9891e53c28a9bc5d0efd19b2f447ce7f0338145450a919bb3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714416310243355817,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-58zjw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 37a072c8-8aaf-4735-86a9-4bd44444005d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6782f344,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5148da366d607322c5acf6fedaa54eeec81d5901a47a2c19bf640ea2132d12d7,PodSandboxId:54919015a0b0c60cd9437e254530ddd076856ae9494816f022588350e1090b11,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714416262309136542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: bbcc8ec6-e9cc-473d-8d5e-e5fabf60cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: b29b96b5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8658b8decf43f7b00b5234119193d5379dafa508b2458ebc721dcbcdd268dc60,PodSandboxId:42228df064aa40b4efadd0b3002091eae5a8f4ad80fa222c4243bf00a3935213,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714416240403138805,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-g9vlr,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c2859801-1eca-4ac0-9612-7f83c77ac4d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebd5ced,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f130515fc5d16af6b8751e730082a6f6943b5e91191980ebf594ba9df03676af,PodSandboxId:3f24f9f2bdbcf0012efc8ecf64312a80dd46dfd229c9ed5c5ccb1c42275d96d7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1714416161545578525,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-l8tfp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a7a0edd-0b5f-41f6-a04a-7ceb34080013,},Annotations:map[string]string{io.kubernetes.container.hash: 34b89f02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b907cfa2f261fccbbdee1832fe816e0252f80e1f6711ea66f7012b9c68c7c05,PodSandboxId:2d2d7b2e1c2d9417fa6e79286019641c40a461ae21e64061fc452ee797a6b5d6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1714416156104751181,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n9qfp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: feb3094f-df5a-4ca9-809a-0b426837620f,},Annotations:map[string]string{io.kubernetes.container.hash: eec69879,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29723a57198ecb6736aa46b90064c6d235583f82f8f570b523eb08d0fc9c53e7,PodSandboxId:2adc934dc9526163c736dae324927ab16d96df4ee16cbeb32cdea6b93a9c0ad8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1714416151898083483,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-5b87k,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 695334d7-ed81-4e1f-8805-0b308e61e51f,},Annotations:map[string]string{io.kubernetes.container.hash: a9e22ea9,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8d385880f89883295a4a6bd71b431cb52dfdbab3f2fc249602b54b0b18a4d9,PodSandboxId:8ccb7691db8a7c30d09ea0aad607c7aa095c9547643edb35abd769fefe06e70b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1714416145971512333,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7cpwq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 734d3fd3-045b-43dd-924f-cd2d77eadbcc,},Annotations:map[string]string{io.kubernetes.container.hash: 20f9d009,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40c213a21bee0d4a0530b8d7edb51ab11bf02b947a1dc38debbe72ba2c3eea16,PodSandboxId:98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714416132204054372,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-xbdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d97597b-550d-4b86-850f-8b839281a545,},Annotations:map[string]string{io.kubernetes.container.hash: 8c871209,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6819fcea7b4fad8d8d7adc770f2b04a66dfcf100f35d5fb0f6b52e3f25813d9,PodSandboxId:447c6a6c57fc59a9672fe90628912c8fb80
bee863c9ac746dbb2b04dab7add28,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714416089733328568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e8e367-62f5-4063-8cd9-523506a10609,},Annotations:map[string]string{io.kubernetes.container.hash: a6977079,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0127dd97a03df877cc50b862b3f419eeb59f37a3f2b4bbdf4546bdee290cf25e,PodSandboxId:f21508223bf35ac04d378b218daf78ad13d443b1015b39a
fa352254b001e007f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714416086952837807,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2xt85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff070716-6e1d-4ac4-96c7-fa6eb4105594,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac317e9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4a23aee1a21bdea7a870774c664b1a6554a1007827af182017169b776d8cf3c,PodSandboxId:d310d206473956f54252da5c679be0f0455eec0cd467eac8a99d9c56bf39d7db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714416084705178195,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsvwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22033d6-3278-412b-8d58-ae73835285fd,},Annotations:map[string]string{io.kubernetes.container.hash: 44a5643,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:8edae0c7e7e7b7865168e4f5d3654e0ac9e8c627d1323178a1618794e43e7b44,PodSandboxId:cb6eff154dc00721778dfa345091adf361662564d0de27891c624788828ec11c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714416062319612387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1f0134674e28304dc7ff0337d3566c1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:7ddb04a35645e46136c0d21b3330787d487d92ccfbc96de7a34f04aee8385685,PodSandboxId:2b92bcdf43a456f3a5d7d8a9384336aa649a20849ba17b0d1cee589273de4a91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714416062272510162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7f2d54e973228a7084cd2d7f18eb35,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:a2791682e5b0aa0ce3e2020d5d6d2965aef373a33d2fab67a9a1c11ef1f17085,PodSandboxId:bb671963d098e05dbbc03ab8a2039ddfb0fd806f87fc325539c25e3bd2ddcca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714416062286547518,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f06ca88a53653b148fbde08ae3cd69e,},Annotations:map[string]string{io.kubernetes.container.hash: 6f22ab8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:a28762184ca2929c27f2b4bee83875934d812823e05b56c5aab7c46ae6b05b2e,PodSandboxId:643dac2625f91a2b78443fb3f732db640f96bfa8bd66b7fa05e3fc8bc4371606,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714416062189058235,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb0226a80bbce0f771b472c76b0984d9,},Annotations:map[string]string{io.kubernetes.container.hash: bf541bbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=
e25bab2d-d0ac-436b-82b1-44ac105f4b8d name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.080316118Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3ea52b4-a3fc-4a30-a109-1a8e1f52c8ce name=/runtime.v1.RuntimeService/Version
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.080395754Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3ea52b4-a3fc-4a30-a109-1a8e1f52c8ce name=/runtime.v1.RuntimeService/Version
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.081744478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=702c0d87-0dc5-4c4d-9504-66267ddc82d3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.083841948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714416411083727606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579668,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=702c0d87-0dc5-4c4d-9504-66267ddc82d3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.084563314Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70f47cdc-1530-43ae-94c9-5cd5dc2946fd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.084626614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70f47cdc-1530-43ae-94c9-5cd5dc2946fd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.085191947Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc8fc0b63ef314b42836b03a887a711953f39d3b92053f68e6fc31c7a287c7b3,PodSandboxId:51d4271a95c5c93f043ffd53993f99a35cd05847e00cf95346b86eb88cd06cb1,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714416404687965737,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-58mmg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28ba31f3-909c-45c3-ba1f-bb5679486b41,},Annotations:map[string]string{io.kubernetes.container.hash: edfe22b7,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2d302338a160a0e2c150527899fa208a7976b4eaa8335b15399f4e981686bb,PodSandboxId:139f0e46199808b9891e53c28a9bc5d0efd19b2f447ce7f0338145450a919bb3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714416310243355817,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-58zjw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 37a072c8-8aaf-4735-86a9-4bd44444005d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6782f344,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5148da366d607322c5acf6fedaa54eeec81d5901a47a2c19bf640ea2132d12d7,PodSandboxId:54919015a0b0c60cd9437e254530ddd076856ae9494816f022588350e1090b11,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714416262309136542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: bbcc8ec6-e9cc-473d-8d5e-e5fabf60cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: b29b96b5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8658b8decf43f7b00b5234119193d5379dafa508b2458ebc721dcbcdd268dc60,PodSandboxId:42228df064aa40b4efadd0b3002091eae5a8f4ad80fa222c4243bf00a3935213,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714416240403138805,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-g9vlr,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c2859801-1eca-4ac0-9612-7f83c77ac4d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebd5ced,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f130515fc5d16af6b8751e730082a6f6943b5e91191980ebf594ba9df03676af,PodSandboxId:3f24f9f2bdbcf0012efc8ecf64312a80dd46dfd229c9ed5c5ccb1c42275d96d7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1714416161545578525,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-l8tfp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a7a0edd-0b5f-41f6-a04a-7ceb34080013,},Annotations:map[string]string{io.kubernetes.container.hash: 34b89f02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b907cfa2f261fccbbdee1832fe816e0252f80e1f6711ea66f7012b9c68c7c05,PodSandboxId:2d2d7b2e1c2d9417fa6e79286019641c40a461ae21e64061fc452ee797a6b5d6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1714416156104751181,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n9qfp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: feb3094f-df5a-4ca9-809a-0b426837620f,},Annotations:map[string]string{io.kubernetes.container.hash: eec69879,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29723a57198ecb6736aa46b90064c6d235583f82f8f570b523eb08d0fc9c53e7,PodSandboxId:2adc934dc9526163c736dae324927ab16d96df4ee16cbeb32cdea6b93a9c0ad8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1714416151898083483,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-5b87k,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 695334d7-ed81-4e1f-8805-0b308e61e51f,},Annotations:map[string]string{io.kubernetes.container.hash: a9e22ea9,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8d385880f89883295a4a6bd71b431cb52dfdbab3f2fc249602b54b0b18a4d9,PodSandboxId:8ccb7691db8a7c30d09ea0aad607c7aa095c9547643edb35abd769fefe06e70b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1714416145971512333,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7cpwq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 734d3fd3-045b-43dd-924f-cd2d77eadbcc,},Annotations:map[string]string{io.kubernetes.container.hash: 20f9d009,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40c213a21bee0d4a0530b8d7edb51ab11bf02b947a1dc38debbe72ba2c3eea16,PodSandboxId:98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714416132204054372,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-xbdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d97597b-550d-4b86-850f-8b839281a545,},Annotations:map[string]string{io.kubernetes.container.hash: 8c871209,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6819fcea7b4fad8d8d7adc770f2b04a66dfcf100f35d5fb0f6b52e3f25813d9,PodSandboxId:447c6a6c57fc59a9672fe90628912c8fb80
bee863c9ac746dbb2b04dab7add28,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714416089733328568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e8e367-62f5-4063-8cd9-523506a10609,},Annotations:map[string]string{io.kubernetes.container.hash: a6977079,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0127dd97a03df877cc50b862b3f419eeb59f37a3f2b4bbdf4546bdee290cf25e,PodSandboxId:f21508223bf35ac04d378b218daf78ad13d443b1015b39a
fa352254b001e007f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714416086952837807,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2xt85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff070716-6e1d-4ac4-96c7-fa6eb4105594,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac317e9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4a23aee1a21bdea7a870774c664b1a6554a1007827af182017169b776d8cf3c,PodSandboxId:d310d206473956f54252da5c679be0f0455eec0cd467eac8a99d9c56bf39d7db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714416084705178195,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsvwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22033d6-3278-412b-8d58-ae73835285fd,},Annotations:map[string]string{io.kubernetes.container.hash: 44a5643,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:8edae0c7e7e7b7865168e4f5d3654e0ac9e8c627d1323178a1618794e43e7b44,PodSandboxId:cb6eff154dc00721778dfa345091adf361662564d0de27891c624788828ec11c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714416062319612387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1f0134674e28304dc7ff0337d3566c1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:7ddb04a35645e46136c0d21b3330787d487d92ccfbc96de7a34f04aee8385685,PodSandboxId:2b92bcdf43a456f3a5d7d8a9384336aa649a20849ba17b0d1cee589273de4a91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714416062272510162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7f2d54e973228a7084cd2d7f18eb35,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:a2791682e5b0aa0ce3e2020d5d6d2965aef373a33d2fab67a9a1c11ef1f17085,PodSandboxId:bb671963d098e05dbbc03ab8a2039ddfb0fd806f87fc325539c25e3bd2ddcca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714416062286547518,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f06ca88a53653b148fbde08ae3cd69e,},Annotations:map[string]string{io.kubernetes.container.hash: 6f22ab8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:a28762184ca2929c27f2b4bee83875934d812823e05b56c5aab7c46ae6b05b2e,PodSandboxId:643dac2625f91a2b78443fb3f732db640f96bfa8bd66b7fa05e3fc8bc4371606,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714416062189058235,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb0226a80bbce0f771b472c76b0984d9,},Annotations:map[string]string{io.kubernetes.container.hash: bf541bbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=
70f47cdc-1530-43ae-94c9-5cd5dc2946fd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.125874175Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64dec642-deec-4618-abf7-aff21aeb7836 name=/runtime.v1.RuntimeService/Version
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.125950347Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64dec642-deec-4618-abf7-aff21aeb7836 name=/runtime.v1.RuntimeService/Version
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.127332154Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c65b7811-c08e-4ad9-a698-4af6b5c3ecac name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.128884619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714416411128854139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579668,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c65b7811-c08e-4ad9-a698-4af6b5c3ecac name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.129597107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7364cc00-4a69-473d-ac85-c243860073bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.129653461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7364cc00-4a69-473d-ac85-c243860073bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.130055007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc8fc0b63ef314b42836b03a887a711953f39d3b92053f68e6fc31c7a287c7b3,PodSandboxId:51d4271a95c5c93f043ffd53993f99a35cd05847e00cf95346b86eb88cd06cb1,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714416404687965737,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-58mmg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28ba31f3-909c-45c3-ba1f-bb5679486b41,},Annotations:map[string]string{io.kubernetes.container.hash: edfe22b7,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2d302338a160a0e2c150527899fa208a7976b4eaa8335b15399f4e981686bb,PodSandboxId:139f0e46199808b9891e53c28a9bc5d0efd19b2f447ce7f0338145450a919bb3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714416310243355817,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-58zjw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 37a072c8-8aaf-4735-86a9-4bd44444005d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6782f344,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5148da366d607322c5acf6fedaa54eeec81d5901a47a2c19bf640ea2132d12d7,PodSandboxId:54919015a0b0c60cd9437e254530ddd076856ae9494816f022588350e1090b11,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714416262309136542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: bbcc8ec6-e9cc-473d-8d5e-e5fabf60cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: b29b96b5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8658b8decf43f7b00b5234119193d5379dafa508b2458ebc721dcbcdd268dc60,PodSandboxId:42228df064aa40b4efadd0b3002091eae5a8f4ad80fa222c4243bf00a3935213,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714416240403138805,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-g9vlr,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c2859801-1eca-4ac0-9612-7f83c77ac4d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebd5ced,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f130515fc5d16af6b8751e730082a6f6943b5e91191980ebf594ba9df03676af,PodSandboxId:3f24f9f2bdbcf0012efc8ecf64312a80dd46dfd229c9ed5c5ccb1c42275d96d7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1714416161545578525,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-l8tfp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a7a0edd-0b5f-41f6-a04a-7ceb34080013,},Annotations:map[string]string{io.kubernetes.container.hash: 34b89f02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b907cfa2f261fccbbdee1832fe816e0252f80e1f6711ea66f7012b9c68c7c05,PodSandboxId:2d2d7b2e1c2d9417fa6e79286019641c40a461ae21e64061fc452ee797a6b5d6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1714416156104751181,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n9qfp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: feb3094f-df5a-4ca9-809a-0b426837620f,},Annotations:map[string]string{io.kubernetes.container.hash: eec69879,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29723a57198ecb6736aa46b90064c6d235583f82f8f570b523eb08d0fc9c53e7,PodSandboxId:2adc934dc9526163c736dae324927ab16d96df4ee16cbeb32cdea6b93a9c0ad8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1714416151898083483,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-5b87k,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 695334d7-ed81-4e1f-8805-0b308e61e51f,},Annotations:map[string]string{io.kubernetes.container.hash: a9e22ea9,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8d385880f89883295a4a6bd71b431cb52dfdbab3f2fc249602b54b0b18a4d9,PodSandboxId:8ccb7691db8a7c30d09ea0aad607c7aa095c9547643edb35abd769fefe06e70b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1714416145971512333,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7cpwq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 734d3fd3-045b-43dd-924f-cd2d77eadbcc,},Annotations:map[string]string{io.kubernetes.container.hash: 20f9d009,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40c213a21bee0d4a0530b8d7edb51ab11bf02b947a1dc38debbe72ba2c3eea16,PodSandboxId:98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714416132204054372,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-xbdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d97597b-550d-4b86-850f-8b839281a545,},Annotations:map[string]string{io.kubernetes.container.hash: 8c871209,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6819fcea7b4fad8d8d7adc770f2b04a66dfcf100f35d5fb0f6b52e3f25813d9,PodSandboxId:447c6a6c57fc59a9672fe90628912c8fb80
bee863c9ac746dbb2b04dab7add28,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714416089733328568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e8e367-62f5-4063-8cd9-523506a10609,},Annotations:map[string]string{io.kubernetes.container.hash: a6977079,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0127dd97a03df877cc50b862b3f419eeb59f37a3f2b4bbdf4546bdee290cf25e,PodSandboxId:f21508223bf35ac04d378b218daf78ad13d443b1015b39a
fa352254b001e007f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714416086952837807,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2xt85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff070716-6e1d-4ac4-96c7-fa6eb4105594,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac317e9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4a23aee1a21bdea7a870774c664b1a6554a1007827af182017169b776d8cf3c,PodSandboxId:d310d206473956f54252da5c679be0f0455eec0cd467eac8a99d9c56bf39d7db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714416084705178195,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsvwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22033d6-3278-412b-8d58-ae73835285fd,},Annotations:map[string]string{io.kubernetes.container.hash: 44a5643,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:8edae0c7e7e7b7865168e4f5d3654e0ac9e8c627d1323178a1618794e43e7b44,PodSandboxId:cb6eff154dc00721778dfa345091adf361662564d0de27891c624788828ec11c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714416062319612387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1f0134674e28304dc7ff0337d3566c1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:7ddb04a35645e46136c0d21b3330787d487d92ccfbc96de7a34f04aee8385685,PodSandboxId:2b92bcdf43a456f3a5d7d8a9384336aa649a20849ba17b0d1cee589273de4a91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714416062272510162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7f2d54e973228a7084cd2d7f18eb35,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:a2791682e5b0aa0ce3e2020d5d6d2965aef373a33d2fab67a9a1c11ef1f17085,PodSandboxId:bb671963d098e05dbbc03ab8a2039ddfb0fd806f87fc325539c25e3bd2ddcca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714416062286547518,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f06ca88a53653b148fbde08ae3cd69e,},Annotations:map[string]string{io.kubernetes.container.hash: 6f22ab8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:a28762184ca2929c27f2b4bee83875934d812823e05b56c5aab7c46ae6b05b2e,PodSandboxId:643dac2625f91a2b78443fb3f732db640f96bfa8bd66b7fa05e3fc8bc4371606,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714416062189058235,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb0226a80bbce0f771b472c76b0984d9,},Annotations:map[string]string{io.kubernetes.container.hash: bf541bbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=
7364cc00-4a69-473d-ac85-c243860073bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.172570873Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e6d632d-32f4-4a15-9233-303cdcf00baa name=/runtime.v1.RuntimeService/Version
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.172677179Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e6d632d-32f4-4a15-9233-303cdcf00baa name=/runtime.v1.RuntimeService/Version
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.173702178Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=05b10424-db9c-4b6a-90c6-305b13dd461e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.175497307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714416411175324697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579668,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05b10424-db9c-4b6a-90c6-305b13dd461e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.176345213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b961cd75-04b1-4f24-b75d-ffeb77ff444d name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.176431491Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b961cd75-04b1-4f24-b75d-ffeb77ff444d name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:46:51 addons-412183 crio[688]: time="2024-04-29 18:46:51.177440277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc8fc0b63ef314b42836b03a887a711953f39d3b92053f68e6fc31c7a287c7b3,PodSandboxId:51d4271a95c5c93f043ffd53993f99a35cd05847e00cf95346b86eb88cd06cb1,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714416404687965737,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-58mmg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28ba31f3-909c-45c3-ba1f-bb5679486b41,},Annotations:map[string]string{io.kubernetes.container.hash: edfe22b7,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2d302338a160a0e2c150527899fa208a7976b4eaa8335b15399f4e981686bb,PodSandboxId:139f0e46199808b9891e53c28a9bc5d0efd19b2f447ce7f0338145450a919bb3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714416310243355817,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-58zjw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 37a072c8-8aaf-4735-86a9-4bd44444005d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6782f344,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5148da366d607322c5acf6fedaa54eeec81d5901a47a2c19bf640ea2132d12d7,PodSandboxId:54919015a0b0c60cd9437e254530ddd076856ae9494816f022588350e1090b11,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714416262309136542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: bbcc8ec6-e9cc-473d-8d5e-e5fabf60cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: b29b96b5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8658b8decf43f7b00b5234119193d5379dafa508b2458ebc721dcbcdd268dc60,PodSandboxId:42228df064aa40b4efadd0b3002091eae5a8f4ad80fa222c4243bf00a3935213,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714416240403138805,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-g9vlr,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c2859801-1eca-4ac0-9612-7f83c77ac4d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebd5ced,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f130515fc5d16af6b8751e730082a6f6943b5e91191980ebf594ba9df03676af,PodSandboxId:3f24f9f2bdbcf0012efc8ecf64312a80dd46dfd229c9ed5c5ccb1c42275d96d7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1714416161545578525,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-l8tfp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a7a0edd-0b5f-41f6-a04a-7ceb34080013,},Annotations:map[string]string{io.kubernetes.container.hash: 34b89f02,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b907cfa2f261fccbbdee1832fe816e0252f80e1f6711ea66f7012b9c68c7c05,PodSandboxId:2d2d7b2e1c2d9417fa6e79286019641c40a461ae21e64061fc452ee797a6b5d6,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1714416156104751181,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n9qfp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: feb3094f-df5a-4ca9-809a-0b426837620f,},Annotations:map[string]string{io.kubernetes.container.hash: eec69879,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29723a57198ecb6736aa46b90064c6d235583f82f8f570b523eb08d0fc9c53e7,PodSandboxId:2adc934dc9526163c736dae324927ab16d96df4ee16cbeb32cdea6b93a9c0ad8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1714416151898083483,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-5b87k,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 695334d7-ed81-4e1f-8805-0b308e61e51f,},Annotations:map[string]string{io.kubernetes.container.hash: a9e22ea9,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8d385880f89883295a4a6bd71b431cb52dfdbab3f2fc249602b54b0b18a4d9,PodSandboxId:8ccb7691db8a7c30d09ea0aad607c7aa095c9547643edb35abd769fefe06e70b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1714416145971512333,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7cpwq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 734d3fd3-045b-43dd-924f-cd2d77eadbcc,},Annotations:map[string]string{io.kubernetes.container.hash: 20f9d009,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40c213a21bee0d4a0530b8d7edb51ab11bf02b947a1dc38debbe72ba2c3eea16,PodSandboxId:98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1714416132204054372,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-xbdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d97597b-550d-4b86-850f-8b839281a545,},Annotations:map[string]string{io.kubernetes.container.hash: 8c871209,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6819fcea7b4fad8d8d7adc770f2b04a66dfcf100f35d5fb0f6b52e3f25813d9,PodSandboxId:447c6a6c57fc59a9672fe90628912c8fb80
bee863c9ac746dbb2b04dab7add28,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714416089733328568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e8e367-62f5-4063-8cd9-523506a10609,},Annotations:map[string]string{io.kubernetes.container.hash: a6977079,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0127dd97a03df877cc50b862b3f419eeb59f37a3f2b4bbdf4546bdee290cf25e,PodSandboxId:f21508223bf35ac04d378b218daf78ad13d443b1015b39a
fa352254b001e007f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714416086952837807,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2xt85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff070716-6e1d-4ac4-96c7-fa6eb4105594,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac317e9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4a23aee1a21bdea7a870774c664b1a6554a1007827af182017169b776d8cf3c,PodSandboxId:d310d206473956f54252da5c679be0f0455eec0cd467eac8a99d9c56bf39d7db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714416084705178195,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsvwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22033d6-3278-412b-8d58-ae73835285fd,},Annotations:map[string]string{io.kubernetes.container.hash: 44a5643,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminati
onGracePeriod: 30,},},&Container{Id:8edae0c7e7e7b7865168e4f5d3654e0ac9e8c627d1323178a1618794e43e7b44,PodSandboxId:cb6eff154dc00721778dfa345091adf361662564d0de27891c624788828ec11c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714416062319612387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1f0134674e28304dc7ff0337d3566c1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:7ddb04a35645e46136c0d21b3330787d487d92ccfbc96de7a34f04aee8385685,PodSandboxId:2b92bcdf43a456f3a5d7d8a9384336aa649a20849ba17b0d1cee589273de4a91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714416062272510162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7f2d54e973228a7084cd2d7f18eb35,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGrace
Period: 30,},},&Container{Id:a2791682e5b0aa0ce3e2020d5d6d2965aef373a33d2fab67a9a1c11ef1f17085,PodSandboxId:bb671963d098e05dbbc03ab8a2039ddfb0fd806f87fc325539c25e3bd2ddcca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714416062286547518,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f06ca88a53653b148fbde08ae3cd69e,},Annotations:map[string]string{io.kubernetes.container.hash: 6f22ab8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&C
ontainer{Id:a28762184ca2929c27f2b4bee83875934d812823e05b56c5aab7c46ae6b05b2e,PodSandboxId:643dac2625f91a2b78443fb3f732db640f96bfa8bd66b7fa05e3fc8bc4371606,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714416062189058235,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb0226a80bbce0f771b472c76b0984d9,},Annotations:map[string]string{io.kubernetes.container.hash: bf541bbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=
b961cd75-04b1-4f24-b75d-ffeb77ff444d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fc8fc0b63ef31       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      6 seconds ago        Running             hello-world-app           0                   51d4271a95c5c       hello-world-app-86c47465fc-58mmg
	1c2d302338a16       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                        About a minute ago   Running             headlamp                  0                   139f0e4619980       headlamp-7559bf459f-58zjw
	5148da366d607       docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88                              2 minutes ago        Running             nginx                     0                   54919015a0b0c       nginx
	8658b8decf43f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 2 minutes ago        Running             gcp-auth                  0                   42228df064aa4       gcp-auth-5db96cd9b4-g9vlr
	f130515fc5d16       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago        Exited              patch                     0                   3f24f9f2bdbcf       ingress-nginx-admission-patch-l8tfp
	3b907cfa2f261       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago        Exited              create                    0                   2d2d7b2e1c2d9       ingress-nginx-admission-create-n9qfp
	29723a57198ec       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago        Running             yakd                      0                   2adc934dc9526       yakd-dashboard-5ddbf7d777-5b87k
	8c8d385880f89       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago        Running             local-path-provisioner    0                   8ccb7691db8a7       local-path-provisioner-8d985888d-7cpwq
	40c213a21bee0       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago        Running             metrics-server            0                   98b14ddadef48       metrics-server-c59844bb4-xbdnx
	d6819fcea7b4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago        Running             storage-provisioner       0                   447c6a6c57fc5       storage-provisioner
	0127dd97a03df       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago        Running             coredns                   0                   f21508223bf35       coredns-7db6d8ff4d-2xt85
	c4a23aee1a21b       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                             5 minutes ago        Running             kube-proxy                0                   d310d20647395       kube-proxy-xsvwz
	8edae0c7e7e7b       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                             5 minutes ago        Running             kube-controller-manager   0                   cb6eff154dc00       kube-controller-manager-addons-412183
	a2791682e5b0a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                             5 minutes ago        Running             kube-apiserver            0                   bb671963d098e       kube-apiserver-addons-412183
	7ddb04a35645e       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                             5 minutes ago        Running             kube-scheduler            0                   2b92bcdf43a45       kube-scheduler-addons-412183
	a28762184ca29       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago        Running             etcd                      0                   643dac2625f91       etcd-addons-412183
	
	
	==> coredns [0127dd97a03df877cc50b862b3f419eeb59f37a3f2b4bbdf4546bdee290cf25e] <==
	[INFO] 10.244.0.7:40337 - 2282 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000216737s
	[INFO] 10.244.0.7:39363 - 19186 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079375s
	[INFO] 10.244.0.7:39363 - 44788 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128334s
	[INFO] 10.244.0.7:50589 - 3994 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070298s
	[INFO] 10.244.0.7:50589 - 56728 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000168587s
	[INFO] 10.244.0.7:45661 - 52302 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096663s
	[INFO] 10.244.0.7:45661 - 55628 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000171665s
	[INFO] 10.244.0.7:34103 - 56710 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000058336s
	[INFO] 10.244.0.7:34103 - 44165 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00022743s
	[INFO] 10.244.0.7:42542 - 20273 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000043826s
	[INFO] 10.244.0.7:42542 - 45630 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090902s
	[INFO] 10.244.0.7:53550 - 42476 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043078s
	[INFO] 10.244.0.7:53550 - 52946 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000159656s
	[INFO] 10.244.0.7:42370 - 63616 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000043076s
	[INFO] 10.244.0.7:42370 - 38786 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000143879s
	[INFO] 10.244.0.22:46040 - 4687 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000646984s
	[INFO] 10.244.0.22:60142 - 56142 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000902713s
	[INFO] 10.244.0.22:53676 - 17323 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107731s
	[INFO] 10.244.0.22:57045 - 18119 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161242s
	[INFO] 10.244.0.22:38046 - 50695 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120569s
	[INFO] 10.244.0.22:45952 - 10907 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123567s
	[INFO] 10.244.0.22:50073 - 54763 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00379794s
	[INFO] 10.244.0.22:36038 - 33275 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.004189416s
	[INFO] 10.244.0.24:33327 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000462441s
	[INFO] 10.244.0.24:41500 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00019273s
	
	
	==> describe nodes <==
	Name:               addons-412183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-412183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=addons-412183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T18_41_08_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-412183
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 18:41:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-412183
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 18:46:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 18:45:43 +0000   Mon, 29 Apr 2024 18:41:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 18:45:43 +0000   Mon, 29 Apr 2024 18:41:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 18:45:43 +0000   Mon, 29 Apr 2024 18:41:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 18:45:43 +0000   Mon, 29 Apr 2024 18:41:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    addons-412183
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 07b4da41a2a64d4fb0e81387a882105f
	  System UUID:                07b4da41-a2a6-4d4f-b0e8-1387a882105f
	  Boot ID:                    bb7d8d4f-bdf5-45de-b22b-79a308c9af93
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-58mmg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  gcp-auth                    gcp-auth-5db96cd9b4-g9vlr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  headlamp                    headlamp-7559bf459f-58zjw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 coredns-7db6d8ff4d-2xt85                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m30s
	  kube-system                 etcd-addons-412183                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m44s
	  kube-system                 kube-apiserver-addons-412183              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-controller-manager-addons-412183     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-proxy-xsvwz                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-addons-412183              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 metrics-server-c59844bb4-xbdnx            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m24s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  local-path-storage          local-path-provisioner-8d985888d-7cpwq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-5b87k           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m22s  kube-proxy       
	  Normal  Starting                 5m44s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m44s  kubelet          Node addons-412183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m44s  kubelet          Node addons-412183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m44s  kubelet          Node addons-412183 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m44s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m43s  kubelet          Node addons-412183 status is now: NodeReady
	  Normal  RegisteredNode           5m31s  node-controller  Node addons-412183 event: Registered Node addons-412183 in Controller
	
	
	==> dmesg <==
	[  +0.160730] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.025683] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.260760] kauditd_printk_skb: 131 callbacks suppressed
	[  +5.960768] kauditd_printk_skb: 106 callbacks suppressed
	[ +13.789273] kauditd_printk_skb: 5 callbacks suppressed
	[Apr29 18:42] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.966614] kauditd_printk_skb: 4 callbacks suppressed
	[ +22.429210] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.033357] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.853048] kauditd_printk_skb: 58 callbacks suppressed
	[Apr29 18:43] kauditd_printk_skb: 2 callbacks suppressed
	[ +14.654654] kauditd_printk_skb: 24 callbacks suppressed
	[ +30.757856] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.624438] kauditd_printk_skb: 15 callbacks suppressed
	[Apr29 18:44] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.343314] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.779507] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.580158] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.793348] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.496820] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.910672] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.038568] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.011140] kauditd_printk_skb: 11 callbacks suppressed
	[Apr29 18:45] kauditd_printk_skb: 26 callbacks suppressed
	[Apr29 18:46] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [a28762184ca2929c27f2b4bee83875934d812823e05b56c5aab7c46ae6b05b2e] <==
	{"level":"warn","ts":"2024-04-29T18:42:49.515583Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:42:49.09942Z","time spent":"416.159281ms","remote":"127.0.0.1:51948","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85576,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-04-29T18:42:49.515734Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.37462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-04-29T18:42:49.51587Z","caller":"traceutil/trace.go:171","msg":"trace[285637063] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1119; }","duration":"149.581537ms","start":"2024-04-29T18:42:49.366278Z","end":"2024-04-29T18:42:49.51586Z","steps":["trace[285637063] 'agreement among raft nodes before linearized reading'  (duration: 149.407347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:42:49.516018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.03821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-04-29T18:42:49.516071Z","caller":"traceutil/trace.go:171","msg":"trace[642312019] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1119; }","duration":"203.115082ms","start":"2024-04-29T18:42:49.312948Z","end":"2024-04-29T18:42:49.516063Z","steps":["trace[642312019] 'agreement among raft nodes before linearized reading'  (duration: 203.01565ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T18:43:54.772367Z","caller":"traceutil/trace.go:171","msg":"trace[152321535] transaction","detail":"{read_only:false; response_revision:1252; number_of_response:1; }","duration":"455.384004ms","start":"2024-04-29T18:43:54.316956Z","end":"2024-04-29T18:43:54.77234Z","steps":["trace[152321535] 'process raft request'  (duration: 455.026582ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:43:54.772573Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:43:54.316943Z","time spent":"455.561115ms","remote":"127.0.0.1:51926","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1250 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-29T18:43:54.773109Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"410.838063ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-04-29T18:43:54.773176Z","caller":"traceutil/trace.go:171","msg":"trace[884231050] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1252; }","duration":"410.930409ms","start":"2024-04-29T18:43:54.362239Z","end":"2024-04-29T18:43:54.773169Z","steps":["trace[884231050] 'agreement among raft nodes before linearized reading'  (duration: 410.781646ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:43:54.773225Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:43:54.362226Z","time spent":"410.992978ms","remote":"127.0.0.1:51948","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14386,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-04-29T18:43:54.772955Z","caller":"traceutil/trace.go:171","msg":"trace[244138109] linearizableReadLoop","detail":"{readStateIndex:1304; appliedIndex:1303; }","duration":"409.842008ms","start":"2024-04-29T18:43:54.362263Z","end":"2024-04-29T18:43:54.772105Z","steps":["trace[244138109] 'read index received'  (duration: 409.665865ms)","trace[244138109] 'applied index is now lower than readState.Index'  (duration: 175.629µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T18:43:54.773647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.586725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-04-29T18:43:54.773699Z","caller":"traceutil/trace.go:171","msg":"trace[964049683] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1252; }","duration":"104.670886ms","start":"2024-04-29T18:43:54.66902Z","end":"2024-04-29T18:43:54.773691Z","steps":["trace[964049683] 'agreement among raft nodes before linearized reading'  (duration: 104.552099ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T18:44:00.148956Z","caller":"traceutil/trace.go:171","msg":"trace[1072383503] linearizableReadLoop","detail":"{readStateIndex:1327; appliedIndex:1326; }","duration":"227.5973ms","start":"2024-04-29T18:43:59.921346Z","end":"2024-04-29T18:44:00.148943Z","steps":["trace[1072383503] 'read index received'  (duration: 227.376784ms)","trace[1072383503] 'applied index is now lower than readState.Index'  (duration: 220.115µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T18:44:00.149258Z","caller":"traceutil/trace.go:171","msg":"trace[790922415] transaction","detail":"{read_only:false; response_revision:1274; number_of_response:1; }","duration":"337.397027ms","start":"2024-04-29T18:43:59.81185Z","end":"2024-04-29T18:44:00.149247Z","steps":["trace[790922415] 'process raft request'  (duration: 336.993803ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:44:00.149424Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:43:59.811748Z","time spent":"337.61764ms","remote":"127.0.0.1:52042","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1254 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-04-29T18:44:00.149215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.87654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T18:44:00.149611Z","caller":"traceutil/trace.go:171","msg":"trace[1228796822] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1274; }","duration":"228.282757ms","start":"2024-04-29T18:43:59.921317Z","end":"2024-04-29T18:44:00.1496Z","steps":["trace[1228796822] 'agreement among raft nodes before linearized reading'  (duration: 227.844234ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T18:44:12.898724Z","caller":"traceutil/trace.go:171","msg":"trace[160721282] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1351; }","duration":"344.314789ms","start":"2024-04-29T18:44:12.554323Z","end":"2024-04-29T18:44:12.898638Z","steps":["trace[160721282] 'process raft request'  (duration: 344.100167ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:44:12.89913Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:44:12.554301Z","time spent":"344.585178ms","remote":"127.0.0.1:51840","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":57,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-4fdlj.17cad4596fac2a19\" mod_revision:609 > success:<request_delete_range:<key:\"/registry/events/gadget/gadget-4fdlj.17cad4596fac2a19\" > > failure:<request_range:<key:\"/registry/events/gadget/gadget-4fdlj.17cad4596fac2a19\" > >"}
	{"level":"warn","ts":"2024-04-29T18:44:22.154995Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.530619ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T18:44:22.155146Z","caller":"traceutil/trace.go:171","msg":"trace[481488012] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1442; }","duration":"287.749967ms","start":"2024-04-29T18:44:21.867377Z","end":"2024-04-29T18:44:22.155127Z","steps":["trace[481488012] 'agreement among raft nodes before linearized reading'  (duration: 287.514228ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T18:44:22.155303Z","caller":"traceutil/trace.go:171","msg":"trace[31018762] linearizableReadLoop","detail":"{readStateIndex:1501; appliedIndex:1500; }","duration":"287.258903ms","start":"2024-04-29T18:44:21.867402Z","end":"2024-04-29T18:44:22.154661Z","steps":["trace[31018762] 'read index received'  (duration: 282.154297ms)","trace[31018762] 'applied index is now lower than readState.Index'  (duration: 5.103555ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T18:44:22.156952Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.88133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:16 size:78629"}
	{"level":"info","ts":"2024-04-29T18:44:22.157016Z","caller":"traceutil/trace.go:171","msg":"trace[637640349] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:16; response_revision:1442; }","duration":"249.980875ms","start":"2024-04-29T18:44:21.907027Z","end":"2024-04-29T18:44:22.157007Z","steps":["trace[637640349] 'agreement among raft nodes before linearized reading'  (duration: 249.590534ms)"],"step_count":1}
	
	
	==> gcp-auth [8658b8decf43f7b00b5234119193d5379dafa508b2458ebc721dcbcdd268dc60] <==
	2024/04/29 18:44:06 Ready to write response ...
	2024/04/29 18:44:11 Ready to marshal response ...
	2024/04/29 18:44:11 Ready to write response ...
	2024/04/29 18:44:14 Ready to marshal response ...
	2024/04/29 18:44:14 Ready to write response ...
	2024/04/29 18:44:25 Ready to marshal response ...
	2024/04/29 18:44:25 Ready to write response ...
	2024/04/29 18:44:32 Ready to marshal response ...
	2024/04/29 18:44:32 Ready to write response ...
	2024/04/29 18:44:36 Ready to marshal response ...
	2024/04/29 18:44:36 Ready to write response ...
	2024/04/29 18:44:36 Ready to marshal response ...
	2024/04/29 18:44:36 Ready to write response ...
	2024/04/29 18:44:39 Ready to marshal response ...
	2024/04/29 18:44:39 Ready to write response ...
	2024/04/29 18:44:49 Ready to marshal response ...
	2024/04/29 18:44:49 Ready to write response ...
	2024/04/29 18:45:04 Ready to marshal response ...
	2024/04/29 18:45:04 Ready to write response ...
	2024/04/29 18:45:04 Ready to marshal response ...
	2024/04/29 18:45:04 Ready to write response ...
	2024/04/29 18:45:04 Ready to marshal response ...
	2024/04/29 18:45:04 Ready to write response ...
	2024/04/29 18:46:40 Ready to marshal response ...
	2024/04/29 18:46:40 Ready to write response ...
	
	
	==> kernel <==
	 18:46:51 up 6 min,  0 users,  load average: 0.44, 0.89, 0.50
	Linux addons-412183 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a2791682e5b0aa0ce3e2020d5d6d2965aef373a33d2fab67a9a1c11ef1f17085] <==
	I0429 18:44:07.493367       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0429 18:44:08.523930       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0429 18:44:13.036454       1 trace.go:236] Trace[221412242]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:3f2ed355-877d-4537-9a65-eb696c18492b,client:192.168.39.105,api-group:,api-version:v1,name:,subresource:,namespace:gadget,protocol:HTTP/2.0,resource:events,scope:namespace,url:/api/v1/namespaces/gadget/events,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:namespace-controller,verb:DELETE (29-Apr-2024 18:44:12.536) (total time: 500ms):
	Trace[221412242]: ---"About to write a response" 494ms (18:44:13.036)
	Trace[221412242]: [500.010231ms] [500.010231ms] END
	I0429 18:44:14.003849       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0429 18:44:14.211078       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.151.215"}
	I0429 18:44:20.326080       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0429 18:44:30.113279       1 conn.go:339] Error on socket receive: read tcp 192.168.39.105:8443->192.168.39.1:37726: use of closed network connection
	E0429 18:44:30.257674       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.105:8443->10.244.0.26:46248: read: connection reset by peer
	I0429 18:44:56.182086       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 18:44:56.182162       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 18:44:56.202251       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 18:44:56.202371       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 18:44:56.222450       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 18:44:56.222549       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 18:44:56.230461       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 18:44:56.231024       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 18:44:56.256500       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 18:44:56.256693       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0429 18:44:57.222643       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0429 18:44:57.257300       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0429 18:44:57.274622       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0429 18:45:04.166929       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.83.62"}
	I0429 18:46:40.723494       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.150.197"}
	
	
	==> kube-controller-manager [8edae0c7e7e7b7865168e4f5d3654e0ac9e8c627d1323178a1618794e43e7b44] <==
	W0429 18:45:31.486454       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:45:31.486486       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:45:31.617570       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:45:31.617655       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:45:33.398714       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:45:33.398842       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:45:59.796209       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:45:59.796275       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:46:09.942146       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:46:09.942254       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:46:16.185154       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:46:16.185364       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:46:19.668071       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:46:19.668234       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 18:46:40.544980       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="48.582411ms"
	I0429 18:46:40.566691       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="17.671054ms"
	I0429 18:46:40.567083       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="335.032µs"
	I0429 18:46:40.582167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="242.112µs"
	I0429 18:46:43.240953       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0429 18:46:43.246448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="10.5µs"
	I0429 18:46:43.260733       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	W0429 18:46:45.249843       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:46:45.249901       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 18:46:45.362179       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="11.654578ms"
	I0429 18:46:45.362348       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="51.31µs"
	
	
	==> kube-proxy [c4a23aee1a21bdea7a870774c664b1a6554a1007827af182017169b776d8cf3c] <==
	I0429 18:41:27.500198       1 server_linux.go:69] "Using iptables proxy"
	I0429 18:41:27.636947       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.105"]
	I0429 18:41:28.897531       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 18:41:28.898101       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 18:41:28.898148       1 server_linux.go:165] "Using iptables Proxier"
	I0429 18:41:28.971476       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 18:41:28.971652       1 server.go:872] "Version info" version="v1.30.0"
	I0429 18:41:28.971665       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 18:41:28.981078       1 config.go:192] "Starting service config controller"
	I0429 18:41:28.981092       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 18:41:28.981119       1 config.go:101] "Starting endpoint slice config controller"
	I0429 18:41:28.981123       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 18:41:28.981580       1 config.go:319] "Starting node config controller"
	I0429 18:41:28.981587       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 18:41:29.081941       1 shared_informer.go:320] Caches are synced for node config
	I0429 18:41:29.081966       1 shared_informer.go:320] Caches are synced for service config
	I0429 18:41:29.081993       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7ddb04a35645e46136c0d21b3330787d487d92ccfbc96de7a34f04aee8385685] <==
	W0429 18:41:05.951977       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 18:41:05.952089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 18:41:06.031233       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 18:41:06.031734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 18:41:06.160065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 18:41:06.160204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 18:41:06.188159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 18:41:06.188246       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 18:41:06.189240       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 18:41:06.189349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 18:41:06.221036       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 18:41:06.221172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 18:41:06.221341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 18:41:06.221410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 18:41:06.232294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 18:41:06.232484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 18:41:06.245830       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 18:41:06.245986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 18:41:06.382504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 18:41:06.382820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 18:41:06.418162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 18:41:06.418957       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 18:41:06.592577       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 18:41:06.592632       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0429 18:41:09.616406       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 18:46:07 addons-412183 kubelet[1289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 18:46:40 addons-412183 kubelet[1289]: I0429 18:46:40.547093    1289 topology_manager.go:215] "Topology Admit Handler" podUID="28ba31f3-909c-45c3-ba1f-bb5679486b41" podNamespace="default" podName="hello-world-app-86c47465fc-58mmg"
	Apr 29 18:46:40 addons-412183 kubelet[1289]: I0429 18:46:40.596994    1289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg4tv\" (UniqueName: \"kubernetes.io/projected/28ba31f3-909c-45c3-ba1f-bb5679486b41-kube-api-access-qg4tv\") pod \"hello-world-app-86c47465fc-58mmg\" (UID: \"28ba31f3-909c-45c3-ba1f-bb5679486b41\") " pod="default/hello-world-app-86c47465fc-58mmg"
	Apr 29 18:46:40 addons-412183 kubelet[1289]: I0429 18:46:40.597076    1289 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/28ba31f3-909c-45c3-ba1f-bb5679486b41-gcp-creds\") pod \"hello-world-app-86c47465fc-58mmg\" (UID: \"28ba31f3-909c-45c3-ba1f-bb5679486b41\") " pod="default/hello-world-app-86c47465fc-58mmg"
	Apr 29 18:46:41 addons-412183 kubelet[1289]: I0429 18:46:41.908569    1289 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzrnr\" (UniqueName: \"kubernetes.io/projected/3ea4da73-e176-41ea-be8d-a33571308b0c-kube-api-access-dzrnr\") pod \"3ea4da73-e176-41ea-be8d-a33571308b0c\" (UID: \"3ea4da73-e176-41ea-be8d-a33571308b0c\") "
	Apr 29 18:46:41 addons-412183 kubelet[1289]: I0429 18:46:41.913426    1289 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea4da73-e176-41ea-be8d-a33571308b0c-kube-api-access-dzrnr" (OuterVolumeSpecName: "kube-api-access-dzrnr") pod "3ea4da73-e176-41ea-be8d-a33571308b0c" (UID: "3ea4da73-e176-41ea-be8d-a33571308b0c"). InnerVolumeSpecName "kube-api-access-dzrnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 18:46:42 addons-412183 kubelet[1289]: I0429 18:46:42.009610    1289 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dzrnr\" (UniqueName: \"kubernetes.io/projected/3ea4da73-e176-41ea-be8d-a33571308b0c-kube-api-access-dzrnr\") on node \"addons-412183\" DevicePath \"\""
	Apr 29 18:46:42 addons-412183 kubelet[1289]: I0429 18:46:42.233372    1289 scope.go:117] "RemoveContainer" containerID="63b60f0e2b8330922eaf868d9be199e07c15499b0e1f59891cb2cb2da9f87840"
	Apr 29 18:46:42 addons-412183 kubelet[1289]: I0429 18:46:42.296248    1289 scope.go:117] "RemoveContainer" containerID="63b60f0e2b8330922eaf868d9be199e07c15499b0e1f59891cb2cb2da9f87840"
	Apr 29 18:46:42 addons-412183 kubelet[1289]: E0429 18:46:42.296990    1289 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63b60f0e2b8330922eaf868d9be199e07c15499b0e1f59891cb2cb2da9f87840\": container with ID starting with 63b60f0e2b8330922eaf868d9be199e07c15499b0e1f59891cb2cb2da9f87840 not found: ID does not exist" containerID="63b60f0e2b8330922eaf868d9be199e07c15499b0e1f59891cb2cb2da9f87840"
	Apr 29 18:46:42 addons-412183 kubelet[1289]: I0429 18:46:42.297122    1289 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63b60f0e2b8330922eaf868d9be199e07c15499b0e1f59891cb2cb2da9f87840"} err="failed to get container status \"63b60f0e2b8330922eaf868d9be199e07c15499b0e1f59891cb2cb2da9f87840\": rpc error: code = NotFound desc = could not find container \"63b60f0e2b8330922eaf868d9be199e07c15499b0e1f59891cb2cb2da9f87840\": container with ID starting with 63b60f0e2b8330922eaf868d9be199e07c15499b0e1f59891cb2cb2da9f87840 not found: ID does not exist"
	Apr 29 18:46:43 addons-412183 kubelet[1289]: I0429 18:46:43.677192    1289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea4da73-e176-41ea-be8d-a33571308b0c" path="/var/lib/kubelet/pods/3ea4da73-e176-41ea-be8d-a33571308b0c/volumes"
	Apr 29 18:46:43 addons-412183 kubelet[1289]: I0429 18:46:43.677634    1289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a7a0edd-0b5f-41f6-a04a-7ceb34080013" path="/var/lib/kubelet/pods/4a7a0edd-0b5f-41f6-a04a-7ceb34080013/volumes"
	Apr 29 18:46:43 addons-412183 kubelet[1289]: I0429 18:46:43.678130    1289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb3094f-df5a-4ca9-809a-0b426837620f" path="/var/lib/kubelet/pods/feb3094f-df5a-4ca9-809a-0b426837620f/volumes"
	Apr 29 18:46:46 addons-412183 kubelet[1289]: I0429 18:46:46.552035    1289 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7-webhook-cert\") pod \"16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7\" (UID: \"16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7\") "
	Apr 29 18:46:46 addons-412183 kubelet[1289]: I0429 18:46:46.552117    1289 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc6zp\" (UniqueName: \"kubernetes.io/projected/16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7-kube-api-access-wc6zp\") pod \"16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7\" (UID: \"16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7\") "
	Apr 29 18:46:46 addons-412183 kubelet[1289]: I0429 18:46:46.554947    1289 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7" (UID: "16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 29 18:46:46 addons-412183 kubelet[1289]: I0429 18:46:46.556669    1289 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7-kube-api-access-wc6zp" (OuterVolumeSpecName: "kube-api-access-wc6zp") pod "16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7" (UID: "16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7"). InnerVolumeSpecName "kube-api-access-wc6zp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 18:46:46 addons-412183 kubelet[1289]: I0429 18:46:46.652573    1289 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wc6zp\" (UniqueName: \"kubernetes.io/projected/16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7-kube-api-access-wc6zp\") on node \"addons-412183\" DevicePath \"\""
	Apr 29 18:46:46 addons-412183 kubelet[1289]: I0429 18:46:46.652614    1289 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7-webhook-cert\") on node \"addons-412183\" DevicePath \"\""
	Apr 29 18:46:47 addons-412183 kubelet[1289]: I0429 18:46:47.353840    1289 scope.go:117] "RemoveContainer" containerID="fe3321c8818d970f2b5b9fdb2982e0e687fc7fe7793d7953e64a87841c115d82"
	Apr 29 18:46:47 addons-412183 kubelet[1289]: I0429 18:46:47.380055    1289 scope.go:117] "RemoveContainer" containerID="fe3321c8818d970f2b5b9fdb2982e0e687fc7fe7793d7953e64a87841c115d82"
	Apr 29 18:46:47 addons-412183 kubelet[1289]: E0429 18:46:47.380850    1289 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3321c8818d970f2b5b9fdb2982e0e687fc7fe7793d7953e64a87841c115d82\": container with ID starting with fe3321c8818d970f2b5b9fdb2982e0e687fc7fe7793d7953e64a87841c115d82 not found: ID does not exist" containerID="fe3321c8818d970f2b5b9fdb2982e0e687fc7fe7793d7953e64a87841c115d82"
	Apr 29 18:46:47 addons-412183 kubelet[1289]: I0429 18:46:47.380916    1289 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3321c8818d970f2b5b9fdb2982e0e687fc7fe7793d7953e64a87841c115d82"} err="failed to get container status \"fe3321c8818d970f2b5b9fdb2982e0e687fc7fe7793d7953e64a87841c115d82\": rpc error: code = NotFound desc = could not find container \"fe3321c8818d970f2b5b9fdb2982e0e687fc7fe7793d7953e64a87841c115d82\": container with ID starting with fe3321c8818d970f2b5b9fdb2982e0e687fc7fe7793d7953e64a87841c115d82 not found: ID does not exist"
	Apr 29 18:46:47 addons-412183 kubelet[1289]: I0429 18:46:47.677222    1289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7" path="/var/lib/kubelet/pods/16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7/volumes"
	
	
	==> storage-provisioner [d6819fcea7b4fad8d8d7adc770f2b04a66dfcf100f35d5fb0f6b52e3f25813d9] <==
	I0429 18:41:31.610359       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 18:41:31.653680       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 18:41:31.653746       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 18:41:31.719068       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 18:41:31.719257       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-412183_00060d43-6ab8-4be3-a0c0-3eff8a05ce05!
	I0429 18:41:31.755464       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"71759172-0ccc-47bf-b198-5b2da54db950", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-412183_00060d43-6ab8-4be3-a0c0-3eff8a05ce05 became leader
	I0429 18:41:31.920009       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-412183_00060d43-6ab8-4be3-a0c0-3eff8a05ce05!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-412183 -n addons-412183
helpers_test.go:261: (dbg) Run:  kubectl --context addons-412183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (158.71s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (337.59s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 29.069923ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-xbdnx" [0d97597b-550d-4b86-850f-8b839281a545] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006870052s
addons_test.go:415: (dbg) Run:  kubectl --context addons-412183 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-412183 top pods -n kube-system: exit status 1 (68.879153ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2xt85, age: 2m44.997749239s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-412183 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-412183 top pods -n kube-system: exit status 1 (71.623436ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2xt85, age: 2m47.420043838s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-412183 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-412183 top pods -n kube-system: exit status 1 (79.759755ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2xt85, age: 2m50.991805233s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-412183 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-412183 top pods -n kube-system: exit status 1 (77.748664ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2xt85, age: 2m56.409669127s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-412183 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-412183 top pods -n kube-system: exit status 1 (68.687603ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2xt85, age: 3m9.800584807s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-412183 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-412183 top pods -n kube-system: exit status 1 (68.97957ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2xt85, age: 3m17.718467415s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-412183 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-412183 top pods -n kube-system: exit status 1 (64.650727ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2xt85, age: 3m39.028070322s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-412183 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-412183 top pods -n kube-system: exit status 1 (69.999717ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2xt85, age: 3m57.844329988s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-412183 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-412183 top pods -n kube-system: exit status 1 (65.620013ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2xt85, age: 4m28.652976729s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-412183 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-412183 top pods -n kube-system: exit status 1 (70.957717ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2xt85, age: 5m20.379576859s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-412183 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-412183 top pods -n kube-system: exit status 1 (64.294333ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2xt85, age: 6m41.870269445s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-412183 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-412183 top pods -n kube-system: exit status 1 (69.758266ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2xt85, age: 7m42.72749998s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-412183 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-412183 top pods -n kube-system: exit status 1 (63.550011ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-2xt85, age: 8m14.400954897s

                                                
                                                
** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
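The polling loop above keeps failing because "kubectl top pods" depends on metrics-server publishing data through the Kubernetes Metrics API. As a hedged aside (these commands are not part of the test harness; the context name addons-412183 is simply taken from the log above), one way to check by hand whether the Metrics API is serving at all in this profile:

	# Confirm the metrics-server deployment is available in the profile's cluster.
	kubectl --context addons-412183 -n kube-system get deployment metrics-server

	# Query the Metrics API directly; an error or an empty item list here means
	# "kubectl top pods" will keep failing exactly as in the retries above.
	kubectl --context addons-412183 get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
	kubectl --context addons-412183 get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"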
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-412183 -n addons-412183
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-412183 logs -n 25: (1.594763314s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| delete  | -p download-only-450771                                                                     | download-only-450771 | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| delete  | -p download-only-513783                                                                     | download-only-513783 | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| delete  | -p download-only-450771                                                                     | download-only-450771 | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-527606 | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC |                     |
	|         | binary-mirror-527606                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33939                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-527606                                                                     | binary-mirror-527606 | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| addons  | disable dashboard -p                                                                        | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC |                     |
	|         | addons-412183                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC |                     |
	|         | addons-412183                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-412183 --wait=true                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:44 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | addons-412183                                                                               |                      |         |         |                     |                     |
	| ip      | addons-412183 ip                                                                            | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	| addons  | addons-412183 addons disable                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-412183 ssh curl -s                                                                   | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-412183 addons disable                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-412183 addons                                                                        | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-412183 ssh cat                                                                       | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | /opt/local-path-provisioner/pvc-44e4f926-cc71-46f4-8659-1c0700bd3215_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-412183 addons disable                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-412183 addons                                                                        | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:44 UTC | 29 Apr 24 18:44 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:45 UTC | 29 Apr 24 18:45 UTC |
	|         | -p addons-412183                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:45 UTC | 29 Apr 24 18:45 UTC |
	|         | addons-412183                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:45 UTC | 29 Apr 24 18:45 UTC |
	|         | -p addons-412183                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-412183 ip                                                                            | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:46 UTC | 29 Apr 24 18:46 UTC |
	| addons  | addons-412183 addons disable                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:46 UTC | 29 Apr 24 18:46 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-412183 addons disable                                                                | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:46 UTC | 29 Apr 24 18:46 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-412183 addons                                                                        | addons-412183        | jenkins | v1.33.0 | 29 Apr 24 18:49 UTC | 29 Apr 24 18:49 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 18:40:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 18:40:21.784513   15893 out.go:291] Setting OutFile to fd 1 ...
	I0429 18:40:21.784759   15893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:40:21.784768   15893 out.go:304] Setting ErrFile to fd 2...
	I0429 18:40:21.784773   15893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:40:21.784961   15893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 18:40:21.785683   15893 out.go:298] Setting JSON to false
	I0429 18:40:21.786597   15893 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1320,"bootTime":1714414702,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 18:40:21.786667   15893 start.go:139] virtualization: kvm guest
	I0429 18:40:21.788842   15893 out.go:177] * [addons-412183] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 18:40:21.791385   15893 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 18:40:21.792814   15893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 18:40:21.791429   15893 notify.go:220] Checking for updates...
	I0429 18:40:21.795404   15893 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:40:21.796729   15893 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:40:21.798048   15893 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 18:40:21.799373   15893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 18:40:21.800728   15893 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 18:40:21.831646   15893 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 18:40:21.832742   15893 start.go:297] selected driver: kvm2
	I0429 18:40:21.832755   15893 start.go:901] validating driver "kvm2" against <nil>
	I0429 18:40:21.832766   15893 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 18:40:21.833473   15893 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:40:21.833550   15893 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 18:40:21.847903   15893 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 18:40:21.847958   15893 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 18:40:21.848156   15893 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 18:40:21.848215   15893 cni.go:84] Creating CNI manager for ""
	I0429 18:40:21.848231   15893 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 18:40:21.848238   15893 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 18:40:21.848291   15893 start.go:340] cluster config:
	{Name:addons-412183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-412183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:40:21.848390   15893 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:40:21.850108   15893 out.go:177] * Starting "addons-412183" primary control-plane node in "addons-412183" cluster
	I0429 18:40:21.851536   15893 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 18:40:21.851573   15893 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 18:40:21.851595   15893 cache.go:56] Caching tarball of preloaded images
	I0429 18:40:21.851675   15893 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 18:40:21.851689   15893 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 18:40:21.852090   15893 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/config.json ...
	I0429 18:40:21.852125   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/config.json: {Name:mk0047e96bc96b9616a4f565ad62819443d7eb7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:21.852265   15893 start.go:360] acquireMachinesLock for addons-412183: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 18:40:21.852331   15893 start.go:364] duration metric: took 42.562µs to acquireMachinesLock for "addons-412183"
	I0429 18:40:21.852355   15893 start.go:93] Provisioning new machine with config: &{Name:addons-412183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-412183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 18:40:21.852415   15893 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 18:40:21.854089   15893 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0429 18:40:21.854216   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:40:21.854263   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:40:21.868333   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0429 18:40:21.868840   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:40:21.869365   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:40:21.869388   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:40:21.869696   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:40:21.869879   15893 main.go:141] libmachine: (addons-412183) Calling .GetMachineName
	I0429 18:40:21.870020   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:21.870171   15893 start.go:159] libmachine.API.Create for "addons-412183" (driver="kvm2")
	I0429 18:40:21.870201   15893 client.go:168] LocalClient.Create starting
	I0429 18:40:21.870236   15893 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 18:40:21.936161   15893 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 18:40:22.153075   15893 main.go:141] libmachine: Running pre-create checks...
	I0429 18:40:22.153101   15893 main.go:141] libmachine: (addons-412183) Calling .PreCreateCheck
	I0429 18:40:22.153643   15893 main.go:141] libmachine: (addons-412183) Calling .GetConfigRaw
	I0429 18:40:22.154052   15893 main.go:141] libmachine: Creating machine...
	I0429 18:40:22.154091   15893 main.go:141] libmachine: (addons-412183) Calling .Create
	I0429 18:40:22.154231   15893 main.go:141] libmachine: (addons-412183) Creating KVM machine...
	I0429 18:40:22.155517   15893 main.go:141] libmachine: (addons-412183) DBG | found existing default KVM network
	I0429 18:40:22.156390   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:22.156242   15915 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0429 18:40:22.156478   15893 main.go:141] libmachine: (addons-412183) DBG | created network xml: 
	I0429 18:40:22.156501   15893 main.go:141] libmachine: (addons-412183) DBG | <network>
	I0429 18:40:22.156513   15893 main.go:141] libmachine: (addons-412183) DBG |   <name>mk-addons-412183</name>
	I0429 18:40:22.156523   15893 main.go:141] libmachine: (addons-412183) DBG |   <dns enable='no'/>
	I0429 18:40:22.156532   15893 main.go:141] libmachine: (addons-412183) DBG |   
	I0429 18:40:22.156546   15893 main.go:141] libmachine: (addons-412183) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0429 18:40:22.156558   15893 main.go:141] libmachine: (addons-412183) DBG |     <dhcp>
	I0429 18:40:22.156570   15893 main.go:141] libmachine: (addons-412183) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0429 18:40:22.156580   15893 main.go:141] libmachine: (addons-412183) DBG |     </dhcp>
	I0429 18:40:22.156590   15893 main.go:141] libmachine: (addons-412183) DBG |   </ip>
	I0429 18:40:22.156603   15893 main.go:141] libmachine: (addons-412183) DBG |   
	I0429 18:40:22.156615   15893 main.go:141] libmachine: (addons-412183) DBG | </network>
	I0429 18:40:22.156623   15893 main.go:141] libmachine: (addons-412183) DBG | 
	I0429 18:40:22.161890   15893 main.go:141] libmachine: (addons-412183) DBG | trying to create private KVM network mk-addons-412183 192.168.39.0/24...
	I0429 18:40:22.226644   15893 main.go:141] libmachine: (addons-412183) DBG | private KVM network mk-addons-412183 192.168.39.0/24 created
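A minimal sketch for inspecting the libvirt network the driver just created, using the same qemu:///system URI the log shows and the network name from the XML above (commands assumed for illustration, not part of the captured log):

        virsh --connect qemu:///system net-list --all
        virsh --connect qemu:///system net-dumpxml mk-addons-412183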
	I0429 18:40:22.226670   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:22.226630   15915 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:40:22.226695   15893 main.go:141] libmachine: (addons-412183) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183 ...
	I0429 18:40:22.226715   15893 main.go:141] libmachine: (addons-412183) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 18:40:22.226778   15893 main.go:141] libmachine: (addons-412183) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 18:40:22.474365   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:22.474272   15915 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa...
	I0429 18:40:22.848313   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:22.848167   15915 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/addons-412183.rawdisk...
	I0429 18:40:22.848342   15893 main.go:141] libmachine: (addons-412183) DBG | Writing magic tar header
	I0429 18:40:22.848352   15893 main.go:141] libmachine: (addons-412183) DBG | Writing SSH key tar header
	I0429 18:40:22.848360   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:22.848277   15915 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183 ...
	I0429 18:40:22.848370   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183
	I0429 18:40:22.848380   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 18:40:22.848388   15893 main.go:141] libmachine: (addons-412183) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183 (perms=drwx------)
	I0429 18:40:22.848415   15893 main.go:141] libmachine: (addons-412183) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 18:40:22.848423   15893 main.go:141] libmachine: (addons-412183) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 18:40:22.848432   15893 main.go:141] libmachine: (addons-412183) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 18:40:22.848440   15893 main.go:141] libmachine: (addons-412183) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 18:40:22.848447   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:40:22.848455   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 18:40:22.848464   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 18:40:22.848473   15893 main.go:141] libmachine: (addons-412183) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 18:40:22.848479   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home/jenkins
	I0429 18:40:22.848489   15893 main.go:141] libmachine: (addons-412183) DBG | Checking permissions on dir: /home
	I0429 18:40:22.848495   15893 main.go:141] libmachine: (addons-412183) DBG | Skipping /home - not owner
	I0429 18:40:22.848502   15893 main.go:141] libmachine: (addons-412183) Creating domain...
	I0429 18:40:22.849830   15893 main.go:141] libmachine: (addons-412183) define libvirt domain using xml: 
	I0429 18:40:22.849864   15893 main.go:141] libmachine: (addons-412183) <domain type='kvm'>
	I0429 18:40:22.849875   15893 main.go:141] libmachine: (addons-412183)   <name>addons-412183</name>
	I0429 18:40:22.849893   15893 main.go:141] libmachine: (addons-412183)   <memory unit='MiB'>4000</memory>
	I0429 18:40:22.849904   15893 main.go:141] libmachine: (addons-412183)   <vcpu>2</vcpu>
	I0429 18:40:22.849919   15893 main.go:141] libmachine: (addons-412183)   <features>
	I0429 18:40:22.849931   15893 main.go:141] libmachine: (addons-412183)     <acpi/>
	I0429 18:40:22.849941   15893 main.go:141] libmachine: (addons-412183)     <apic/>
	I0429 18:40:22.849950   15893 main.go:141] libmachine: (addons-412183)     <pae/>
	I0429 18:40:22.849962   15893 main.go:141] libmachine: (addons-412183)     
	I0429 18:40:22.849974   15893 main.go:141] libmachine: (addons-412183)   </features>
	I0429 18:40:22.849993   15893 main.go:141] libmachine: (addons-412183)   <cpu mode='host-passthrough'>
	I0429 18:40:22.850023   15893 main.go:141] libmachine: (addons-412183)   
	I0429 18:40:22.850048   15893 main.go:141] libmachine: (addons-412183)   </cpu>
	I0429 18:40:22.850058   15893 main.go:141] libmachine: (addons-412183)   <os>
	I0429 18:40:22.850088   15893 main.go:141] libmachine: (addons-412183)     <type>hvm</type>
	I0429 18:40:22.850099   15893 main.go:141] libmachine: (addons-412183)     <boot dev='cdrom'/>
	I0429 18:40:22.850110   15893 main.go:141] libmachine: (addons-412183)     <boot dev='hd'/>
	I0429 18:40:22.850120   15893 main.go:141] libmachine: (addons-412183)     <bootmenu enable='no'/>
	I0429 18:40:22.850129   15893 main.go:141] libmachine: (addons-412183)   </os>
	I0429 18:40:22.850138   15893 main.go:141] libmachine: (addons-412183)   <devices>
	I0429 18:40:22.850150   15893 main.go:141] libmachine: (addons-412183)     <disk type='file' device='cdrom'>
	I0429 18:40:22.850176   15893 main.go:141] libmachine: (addons-412183)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/boot2docker.iso'/>
	I0429 18:40:22.850192   15893 main.go:141] libmachine: (addons-412183)       <target dev='hdc' bus='scsi'/>
	I0429 18:40:22.850198   15893 main.go:141] libmachine: (addons-412183)       <readonly/>
	I0429 18:40:22.850205   15893 main.go:141] libmachine: (addons-412183)     </disk>
	I0429 18:40:22.850212   15893 main.go:141] libmachine: (addons-412183)     <disk type='file' device='disk'>
	I0429 18:40:22.850220   15893 main.go:141] libmachine: (addons-412183)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 18:40:22.850228   15893 main.go:141] libmachine: (addons-412183)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/addons-412183.rawdisk'/>
	I0429 18:40:22.850236   15893 main.go:141] libmachine: (addons-412183)       <target dev='hda' bus='virtio'/>
	I0429 18:40:22.850242   15893 main.go:141] libmachine: (addons-412183)     </disk>
	I0429 18:40:22.850249   15893 main.go:141] libmachine: (addons-412183)     <interface type='network'>
	I0429 18:40:22.850255   15893 main.go:141] libmachine: (addons-412183)       <source network='mk-addons-412183'/>
	I0429 18:40:22.850260   15893 main.go:141] libmachine: (addons-412183)       <model type='virtio'/>
	I0429 18:40:22.850266   15893 main.go:141] libmachine: (addons-412183)     </interface>
	I0429 18:40:22.850276   15893 main.go:141] libmachine: (addons-412183)     <interface type='network'>
	I0429 18:40:22.850282   15893 main.go:141] libmachine: (addons-412183)       <source network='default'/>
	I0429 18:40:22.850292   15893 main.go:141] libmachine: (addons-412183)       <model type='virtio'/>
	I0429 18:40:22.850297   15893 main.go:141] libmachine: (addons-412183)     </interface>
	I0429 18:40:22.850304   15893 main.go:141] libmachine: (addons-412183)     <serial type='pty'>
	I0429 18:40:22.850326   15893 main.go:141] libmachine: (addons-412183)       <target port='0'/>
	I0429 18:40:22.850346   15893 main.go:141] libmachine: (addons-412183)     </serial>
	I0429 18:40:22.850361   15893 main.go:141] libmachine: (addons-412183)     <console type='pty'>
	I0429 18:40:22.850374   15893 main.go:141] libmachine: (addons-412183)       <target type='serial' port='0'/>
	I0429 18:40:22.850388   15893 main.go:141] libmachine: (addons-412183)     </console>
	I0429 18:40:22.850398   15893 main.go:141] libmachine: (addons-412183)     <rng model='virtio'>
	I0429 18:40:22.850414   15893 main.go:141] libmachine: (addons-412183)       <backend model='random'>/dev/random</backend>
	I0429 18:40:22.850432   15893 main.go:141] libmachine: (addons-412183)     </rng>
	I0429 18:40:22.850446   15893 main.go:141] libmachine: (addons-412183)     
	I0429 18:40:22.850457   15893 main.go:141] libmachine: (addons-412183)     
	I0429 18:40:22.850468   15893 main.go:141] libmachine: (addons-412183)   </devices>
	I0429 18:40:22.850479   15893 main.go:141] libmachine: (addons-412183) </domain>
	I0429 18:40:22.850493   15893 main.go:141] libmachine: (addons-412183) 
	I0429 18:40:22.856527   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:88:fc:c5 in network default
	I0429 18:40:22.857078   15893 main.go:141] libmachine: (addons-412183) Ensuring networks are active...
	I0429 18:40:22.857098   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:22.857702   15893 main.go:141] libmachine: (addons-412183) Ensuring network default is active
	I0429 18:40:22.857949   15893 main.go:141] libmachine: (addons-412183) Ensuring network mk-addons-412183 is active
	I0429 18:40:22.858418   15893 main.go:141] libmachine: (addons-412183) Getting domain xml...
	I0429 18:40:22.859144   15893 main.go:141] libmachine: (addons-412183) Creating domain...
	I0429 18:40:24.214001   15893 main.go:141] libmachine: (addons-412183) Waiting to get IP...
	I0429 18:40:24.214795   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:24.215152   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:24.215181   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:24.215143   15915 retry.go:31] will retry after 288.194622ms: waiting for machine to come up
	I0429 18:40:24.504738   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:24.505124   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:24.505148   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:24.505089   15915 retry.go:31] will retry after 245.840505ms: waiting for machine to come up
	I0429 18:40:24.752573   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:24.752929   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:24.752958   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:24.752895   15915 retry.go:31] will retry after 484.478167ms: waiting for machine to come up
	I0429 18:40:25.238615   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:25.238999   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:25.239025   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:25.238958   15915 retry.go:31] will retry after 474.929578ms: waiting for machine to come up
	I0429 18:40:25.715549   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:25.715870   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:25.715897   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:25.715820   15915 retry.go:31] will retry after 711.577824ms: waiting for machine to come up
	I0429 18:40:26.428691   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:26.429226   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:26.429257   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:26.429210   15915 retry.go:31] will retry after 704.057958ms: waiting for machine to come up
	I0429 18:40:27.134378   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:27.134698   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:27.134730   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:27.134646   15915 retry.go:31] will retry after 804.442246ms: waiting for machine to come up
	I0429 18:40:27.940759   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:27.941079   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:27.941110   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:27.941036   15915 retry.go:31] will retry after 1.318337249s: waiting for machine to come up
	I0429 18:40:29.261464   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:29.261881   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:29.261903   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:29.261851   15915 retry.go:31] will retry after 1.371381026s: waiting for machine to come up
	I0429 18:40:30.634325   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:30.634655   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:30.634718   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:30.634627   15915 retry.go:31] will retry after 2.146502423s: waiting for machine to come up
	I0429 18:40:32.782976   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:32.783473   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:32.783502   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:32.783429   15915 retry.go:31] will retry after 2.393799937s: waiting for machine to come up
	I0429 18:40:35.180130   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:35.180570   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:35.180618   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:35.180554   15915 retry.go:31] will retry after 3.630272395s: waiting for machine to come up
	I0429 18:40:38.812364   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:38.812741   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:38.812771   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:38.812690   15915 retry.go:31] will retry after 3.982338564s: waiting for machine to come up
	I0429 18:40:42.796447   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:42.796831   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find current IP address of domain addons-412183 in network mk-addons-412183
	I0429 18:40:42.796858   15893 main.go:141] libmachine: (addons-412183) DBG | I0429 18:40:42.796769   15915 retry.go:31] will retry after 5.362319181s: waiting for machine to come up
	I0429 18:40:48.160567   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.160897   15893 main.go:141] libmachine: (addons-412183) Found IP for machine: 192.168.39.105
	I0429 18:40:48.160928   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has current primary IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.160942   15893 main.go:141] libmachine: (addons-412183) Reserving static IP address...
	I0429 18:40:48.161249   15893 main.go:141] libmachine: (addons-412183) DBG | unable to find host DHCP lease matching {name: "addons-412183", mac: "52:54:00:ae:0f:aa", ip: "192.168.39.105"} in network mk-addons-412183
	I0429 18:40:48.233647   15893 main.go:141] libmachine: (addons-412183) DBG | Getting to WaitForSSH function...
	I0429 18:40:48.233689   15893 main.go:141] libmachine: (addons-412183) Reserved static IP address: 192.168.39.105
	I0429 18:40:48.233702   15893 main.go:141] libmachine: (addons-412183) Waiting for SSH to be available...
	I0429 18:40:48.236113   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.236470   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.236496   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.236715   15893 main.go:141] libmachine: (addons-412183) DBG | Using SSH client type: external
	I0429 18:40:48.236745   15893 main.go:141] libmachine: (addons-412183) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa (-rw-------)
	I0429 18:40:48.236778   15893 main.go:141] libmachine: (addons-412183) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 18:40:48.236794   15893 main.go:141] libmachine: (addons-412183) DBG | About to run SSH command:
	I0429 18:40:48.236814   15893 main.go:141] libmachine: (addons-412183) DBG | exit 0
	I0429 18:40:48.366530   15893 main.go:141] libmachine: (addons-412183) DBG | SSH cmd err, output: <nil>: 
	I0429 18:40:48.366783   15893 main.go:141] libmachine: (addons-412183) KVM machine creation complete!
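A minimal sketch for inspecting the freshly created domain and the DHCP lease the driver polled for above (commands assumed for illustration, not part of the captured log):

        virsh --connect qemu:///system dumpxml addons-412183
        virsh --connect qemu:///system domifaddr addons-412183
        virsh --connect qemu:///system net-dhcp-leases mk-addons-412183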
	I0429 18:40:48.367148   15893 main.go:141] libmachine: (addons-412183) Calling .GetConfigRaw
	I0429 18:40:48.367724   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:48.367929   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:48.368127   15893 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 18:40:48.368143   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:40:48.369258   15893 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 18:40:48.369272   15893 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 18:40:48.369278   15893 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 18:40:48.369284   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:48.371568   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.371944   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.371985   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.372073   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:48.372239   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.372383   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.372521   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:48.372646   15893 main.go:141] libmachine: Using SSH client type: native
	I0429 18:40:48.372857   15893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0429 18:40:48.372874   15893 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 18:40:48.477953   15893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 18:40:48.477984   15893 main.go:141] libmachine: Detecting the provisioner...
	I0429 18:40:48.477992   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:48.480945   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.481292   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.481324   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.481504   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:48.481712   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.481845   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.482020   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:48.482221   15893 main.go:141] libmachine: Using SSH client type: native
	I0429 18:40:48.482384   15893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0429 18:40:48.482395   15893 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 18:40:48.587366   15893 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 18:40:48.587469   15893 main.go:141] libmachine: found compatible host: buildroot
	I0429 18:40:48.587485   15893 main.go:141] libmachine: Provisioning with buildroot...
	I0429 18:40:48.587500   15893 main.go:141] libmachine: (addons-412183) Calling .GetMachineName
	I0429 18:40:48.587785   15893 buildroot.go:166] provisioning hostname "addons-412183"
	I0429 18:40:48.587809   15893 main.go:141] libmachine: (addons-412183) Calling .GetMachineName
	I0429 18:40:48.588009   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:48.590423   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.590744   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.590770   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.590872   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:48.591061   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.591208   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.591346   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:48.591492   15893 main.go:141] libmachine: Using SSH client type: native
	I0429 18:40:48.591644   15893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0429 18:40:48.591655   15893 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-412183 && echo "addons-412183" | sudo tee /etc/hostname
	I0429 18:40:48.716205   15893 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-412183
	
	I0429 18:40:48.716232   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:48.718545   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.718958   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.718981   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.719162   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:48.719347   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.719493   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.719651   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:48.719824   15893 main.go:141] libmachine: Using SSH client type: native
	I0429 18:40:48.719980   15893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0429 18:40:48.719995   15893 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-412183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-412183/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-412183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 18:40:48.832822   15893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
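A minimal sketch for double-checking the hostname and /etc/hosts entry provisioned by the script above, assuming minikube ssh is used against this profile (not part of the captured log):

        minikube -p addons-412183 ssh 'hostname; grep addons-412183 /etc/hosts'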
	I0429 18:40:48.832848   15893 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 18:40:48.832901   15893 buildroot.go:174] setting up certificates
	I0429 18:40:48.832922   15893 provision.go:84] configureAuth start
	I0429 18:40:48.832934   15893 main.go:141] libmachine: (addons-412183) Calling .GetMachineName
	I0429 18:40:48.833212   15893 main.go:141] libmachine: (addons-412183) Calling .GetIP
	I0429 18:40:48.835702   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.836005   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.836027   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.836205   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:48.838491   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.838833   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.838857   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.839008   15893 provision.go:143] copyHostCerts
	I0429 18:40:48.839075   15893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 18:40:48.839217   15893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 18:40:48.839299   15893 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 18:40:48.839382   15893 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.addons-412183 san=[127.0.0.1 192.168.39.105 addons-412183 localhost minikube]
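	The san=[...] list above ends up in the machine's server.pem. A minimal Go sketch (self-signed here for brevity, whereas minikube signs with ca.pem/ca-key.pem) showing how a certificate with those SANs can be produced with the standard library:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-412183"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the provision.go line above.
			DNSNames:    []string{"addons-412183", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.105")},
		}
		// Self-signed here; minikube signs with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}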
	I0429 18:40:48.904456   15893 provision.go:177] copyRemoteCerts
	I0429 18:40:48.904510   15893 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 18:40:48.904531   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:48.907044   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.907353   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:48.907376   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:48.907533   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:48.907723   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:48.907885   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:48.908000   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:40:48.989519   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 18:40:49.017669   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 18:40:49.044958   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 18:40:49.072162   15893 provision.go:87] duration metric: took 239.219762ms to configureAuth
	I0429 18:40:49.072191   15893 buildroot.go:189] setting minikube options for container-runtime
	I0429 18:40:49.072396   15893 config.go:182] Loaded profile config "addons-412183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:40:49.072483   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:49.074946   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.075265   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.075294   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.075482   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:49.075659   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.075820   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.075924   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:49.076070   15893 main.go:141] libmachine: Using SSH client type: native
	I0429 18:40:49.076226   15893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0429 18:40:49.076241   15893 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 18:40:49.350495   15893 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 18:40:49.350516   15893 main.go:141] libmachine: Checking connection to Docker...
	I0429 18:40:49.350523   15893 main.go:141] libmachine: (addons-412183) Calling .GetURL
	I0429 18:40:49.351929   15893 main.go:141] libmachine: (addons-412183) DBG | Using libvirt version 6000000
	I0429 18:40:49.354022   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.354364   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.354397   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.354581   15893 main.go:141] libmachine: Docker is up and running!
	I0429 18:40:49.354595   15893 main.go:141] libmachine: Reticulating splines...
	I0429 18:40:49.354602   15893 client.go:171] duration metric: took 27.484392148s to LocalClient.Create
	I0429 18:40:49.354629   15893 start.go:167] duration metric: took 27.48445816s to libmachine.API.Create "addons-412183"
	I0429 18:40:49.354643   15893 start.go:293] postStartSetup for "addons-412183" (driver="kvm2")
	I0429 18:40:49.354655   15893 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 18:40:49.354677   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:49.354886   15893 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 18:40:49.354920   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:49.357108   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.357466   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.357494   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.357640   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:49.357805   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.357929   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:49.358036   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:40:49.441961   15893 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 18:40:49.447082   15893 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 18:40:49.447107   15893 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 18:40:49.447182   15893 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 18:40:49.447213   15893 start.go:296] duration metric: took 92.5635ms for postStartSetup
	I0429 18:40:49.447263   15893 main.go:141] libmachine: (addons-412183) Calling .GetConfigRaw
	I0429 18:40:49.447816   15893 main.go:141] libmachine: (addons-412183) Calling .GetIP
	I0429 18:40:49.450194   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.450546   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.450584   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.450749   15893 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/config.json ...
	I0429 18:40:49.450922   15893 start.go:128] duration metric: took 27.598497909s to createHost
	I0429 18:40:49.450948   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:49.453199   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.453566   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.453599   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.453726   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:49.453865   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.453991   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.454116   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:49.454237   15893 main.go:141] libmachine: Using SSH client type: native
	I0429 18:40:49.454427   15893 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0429 18:40:49.454439   15893 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 18:40:49.559223   15893 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714416049.547431399
	
	I0429 18:40:49.559246   15893 fix.go:216] guest clock: 1714416049.547431399
	I0429 18:40:49.559253   15893 fix.go:229] Guest: 2024-04-29 18:40:49.547431399 +0000 UTC Remote: 2024-04-29 18:40:49.450933922 +0000 UTC m=+27.711157503 (delta=96.497477ms)
	I0429 18:40:49.559295   15893 fix.go:200] guest clock delta is within tolerance: 96.497477ms
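	The delta reported above (96.497477ms) is simply the guest clock minus the host clock. A minimal Go sketch of that check (the 2s tolerance is an assumption; the log only states the delta is within tolerance):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock parses the guest's `date +%s.%N` output into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec, err := strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1714416049.547431399") // SSH output above
		if err != nil {
			panic(err)
		}
		host := time.Date(2024, 4, 29, 18, 40, 49, 450933922, time.UTC) // Remote timestamp above
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Printf("guest clock delta %v within tolerance %v: %v\n", delta, tolerance, delta <= tolerance)
	}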
	I0429 18:40:49.559302   15893 start.go:83] releasing machines lock for "addons-412183", held for 27.706957406s
	I0429 18:40:49.559322   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:49.559563   15893 main.go:141] libmachine: (addons-412183) Calling .GetIP
	I0429 18:40:49.561992   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.562344   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.562365   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.562490   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:49.562972   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:49.563111   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:40:49.563201   15893 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 18:40:49.563244   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:49.563287   15893 ssh_runner.go:195] Run: cat /version.json
	I0429 18:40:49.563307   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:40:49.565464   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.565633   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.565754   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.565777   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.565910   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:49.566023   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:49.566036   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.566058   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:49.566194   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:49.566206   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:40:49.566369   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:40:49.566371   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:40:49.566534   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:40:49.566656   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:40:49.668171   15893 ssh_runner.go:195] Run: systemctl --version
	I0429 18:40:49.674676   15893 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 18:40:49.836391   15893 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 18:40:49.843360   15893 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 18:40:49.843428   15893 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 18:40:49.862567   15893 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 18:40:49.862593   15893 start.go:494] detecting cgroup driver to use...
	I0429 18:40:49.862648   15893 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 18:40:49.879687   15893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 18:40:49.895108   15893 docker.go:217] disabling cri-docker service (if available) ...
	I0429 18:40:49.895169   15893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 18:40:49.910287   15893 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 18:40:49.925348   15893 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 18:40:50.048526   15893 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 18:40:50.195718   15893 docker.go:233] disabling docker service ...
	I0429 18:40:50.195788   15893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 18:40:50.210506   15893 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 18:40:50.224510   15893 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 18:40:50.375508   15893 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 18:40:50.503250   15893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 18:40:50.518644   15893 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 18:40:50.539610   15893 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 18:40:50.539676   15893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:40:50.551371   15893 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 18:40:50.551438   15893 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:40:50.563221   15893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:40:50.574936   15893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:40:50.586668   15893 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 18:40:50.599929   15893 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:40:50.612816   15893 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:40:50.633021   15893 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
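	The sed pipeline above amounts to three edits of /etc/crio/crio.conf.d/02-crio.conf: pin pause_image, switch cgroup_manager to cgroupfs with conmon_cgroup = "pod", and open unprivileged low ports via default_sysctls. A minimal Go sketch of the same rewrite (illustrative, not the minikube code path):

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf mirrors the sed edits above on the contents of
	// /etc/crio/crio.conf.d/02-crio.conf.
	func rewriteCrioConf(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
		return conf
	}

	func main() {
		sample := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(sample))
	}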
	I0429 18:40:50.644592   15893 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 18:40:50.654522   15893 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 18:40:50.654580   15893 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 18:40:50.669083   15893 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 18:40:50.680207   15893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:40:50.826021   15893 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 18:40:50.981738   15893 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 18:40:50.981820   15893 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 18:40:50.987042   15893 start.go:562] Will wait 60s for crictl version
	I0429 18:40:50.987136   15893 ssh_runner.go:195] Run: which crictl
	I0429 18:40:50.991562   15893 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 18:40:51.033836   15893 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 18:40:51.033962   15893 ssh_runner.go:195] Run: crio --version
	I0429 18:40:51.063703   15893 ssh_runner.go:195] Run: crio --version
	I0429 18:40:51.097696   15893 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 18:40:51.099168   15893 main.go:141] libmachine: (addons-412183) Calling .GetIP
	I0429 18:40:51.101786   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:51.102130   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:40:51.102154   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:40:51.102336   15893 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 18:40:51.106850   15893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 18:40:51.121445   15893 kubeadm.go:877] updating cluster {Name:addons-412183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-412183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 18:40:51.121569   15893 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 18:40:51.121638   15893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 18:40:51.157154   15893 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 18:40:51.157220   15893 ssh_runner.go:195] Run: which lz4
	I0429 18:40:51.161632   15893 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 18:40:51.166349   15893 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 18:40:51.166379   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 18:40:52.775195   15893 crio.go:462] duration metric: took 1.613587682s to copy over tarball
	I0429 18:40:52.775271   15893 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 18:40:55.453183   15893 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.677878186s)
	I0429 18:40:55.453225   15893 crio.go:469] duration metric: took 2.677995586s to extract the tarball
	I0429 18:40:55.453234   15893 ssh_runner.go:146] rm: /preloaded.tar.lz4
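	A minimal Go sketch of the preload step above, run on the guest: extract the lz4 tarball under /var with xattrs preserved, report the duration, then remove it (assumes tar and lz4 are installed, as on the Buildroot image):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Same flags as the logged command: preserve security xattrs and
		// decompress with lz4 while extracting under /var.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
		fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
		_ = os.Remove("/preloaded.tar.lz4") // mirrors the rm in ssh_runner.go:146
	}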
	I0429 18:40:55.493518   15893 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 18:40:55.540670   15893 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 18:40:55.540695   15893 cache_images.go:84] Images are preloaded, skipping loading
	I0429 18:40:55.540714   15893 kubeadm.go:928] updating node { 192.168.39.105 8443 v1.30.0 crio true true} ...
	I0429 18:40:55.540856   15893 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-412183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-412183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 18:40:55.540921   15893 ssh_runner.go:195] Run: crio config
	I0429 18:40:55.587090   15893 cni.go:84] Creating CNI manager for ""
	I0429 18:40:55.587116   15893 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 18:40:55.587127   15893 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 18:40:55.587146   15893 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.105 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-412183 NodeName:addons-412183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 18:40:55.587292   15893 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-412183"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 18:40:55.587347   15893 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 18:40:55.599220   15893 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 18:40:55.599328   15893 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 18:40:55.610609   15893 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0429 18:40:55.629922   15893 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 18:40:55.649794   15893 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
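	With the generated config copied to /var/tmp/minikube/kubeadm.yaml.new, it can be validated without touching the node via kubeadm's --dry-run; a minimal sketch (illustrative only, minikube itself does not run this step):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// kubeadm's --dry-run renders manifests and runs preflight logic without
		// changing the host; the config path is the one written above.
		cmd := exec.Command("sudo", "kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}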
	I0429 18:40:55.668119   15893 ssh_runner.go:195] Run: grep 192.168.39.105	control-plane.minikube.internal$ /etc/hosts
	I0429 18:40:55.672376   15893 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
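	The one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP. A minimal Go sketch of the same upsert (illustrative only):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost drops any existing "<ip>\t<name>" line and appends a fresh one,
	// mirroring the grep/echo/cp pipeline in the log.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		kept := []string{}
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := upsertHost("/etc/hosts", "192.168.39.105", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}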
	I0429 18:40:55.686104   15893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:40:55.819811   15893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 18:40:55.841760   15893 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183 for IP: 192.168.39.105
	I0429 18:40:55.841785   15893 certs.go:194] generating shared ca certs ...
	I0429 18:40:55.841800   15893 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:55.841931   15893 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 18:40:56.018106   15893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt ...
	I0429 18:40:56.018134   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt: {Name:mk1a90f1f1cee68ee2944530d90bce20d77faff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.018281   15893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key ...
	I0429 18:40:56.018291   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key: {Name:mk8c549bc46400cd1867a972d6452fc361e7555c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.018358   15893 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 18:40:56.243415   15893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt ...
	I0429 18:40:56.243446   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt: {Name:mk037f9ed9a0ba0db804d2da948eeaadeb55e807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.243592   15893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key ...
	I0429 18:40:56.243602   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key: {Name:mk9eca9dab20265def7e00d5b3901d053a7e6b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.243670   15893 certs.go:256] generating profile certs ...
	I0429 18:40:56.243729   15893 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.key
	I0429 18:40:56.243743   15893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt with IP's: []
	I0429 18:40:56.427080   15893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt ...
	I0429 18:40:56.427110   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: {Name:mk45d4f3b66b94530d94e119121be0e39708fbd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.427258   15893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.key ...
	I0429 18:40:56.427268   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.key: {Name:mkbfbe12272f10cea48b7ddf6c1b1f5fe0611db9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.427332   15893 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.key.7d7f4af1
	I0429 18:40:56.427349   15893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.crt.7d7f4af1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.105]
	I0429 18:40:56.564420   15893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.crt.7d7f4af1 ...
	I0429 18:40:56.564458   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.crt.7d7f4af1: {Name:mkbc4ad6ce5f1f28dc2d8233d39abccb1153c632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.564606   15893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.key.7d7f4af1 ...
	I0429 18:40:56.564619   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.key.7d7f4af1: {Name:mkdffc6c3c88557574c00993aadbb459913af94f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.564691   15893 certs.go:381] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.crt.7d7f4af1 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.crt
	I0429 18:40:56.564757   15893 certs.go:385] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.key.7d7f4af1 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.key
	I0429 18:40:56.564800   15893 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.key
	I0429 18:40:56.564815   15893 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.crt with IP's: []
	I0429 18:40:56.694779   15893 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.crt ...
	I0429 18:40:56.694808   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.crt: {Name:mkf02c29d4dee44c6646830909239c091b8389a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.694971   15893 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.key ...
	I0429 18:40:56.694982   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.key: {Name:mk713a023302a5a8d96afc62463fc93cb9b4c09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:40:56.695144   15893 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 18:40:56.695179   15893 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 18:40:56.695211   15893 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 18:40:56.695234   15893 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 18:40:56.695792   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 18:40:56.743184   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 18:40:56.777073   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 18:40:56.808425   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 18:40:56.981611   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 18:40:57.015328   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 18:40:57.043949   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 18:40:57.071639   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 18:40:57.099014   15893 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 18:40:57.126315   15893 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 18:40:57.145314   15893 ssh_runner.go:195] Run: openssl version
	I0429 18:40:57.152645   15893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 18:40:57.166077   15893 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:40:57.171455   15893 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:40:57.171510   15893 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:40:57.177897   15893 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
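	The openssl hash / symlink steps above install minikubeCA.pem into the system trust directory under its subject-hash name (b5213941.0). A minimal Go sketch of the same steps, run as root (illustrative only):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const ca = "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the log above
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link)
		if err := os.Symlink(ca, link); err != nil {
			panic(err)
		}
	}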
	I0429 18:40:57.190953   15893 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 18:40:57.196071   15893 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 18:40:57.196131   15893 kubeadm.go:391] StartCluster: {Name:addons-412183 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-412183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:40:57.196217   15893 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 18:40:57.196274   15893 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 18:40:57.248603   15893 cri.go:89] found id: ""
	I0429 18:40:57.248681   15893 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 18:40:57.262452   15893 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 18:40:57.275795   15893 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 18:40:57.289005   15893 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 18:40:57.289026   15893 kubeadm.go:156] found existing configuration files:
	
	I0429 18:40:57.289067   15893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 18:40:57.301793   15893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 18:40:57.301867   15893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 18:40:57.313045   15893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 18:40:57.325893   15893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 18:40:57.325948   15893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 18:40:57.339064   15893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 18:40:57.351680   15893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 18:40:57.351741   15893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 18:40:57.365076   15893 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 18:40:57.375858   15893 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 18:40:57.375914   15893 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 18:40:57.392796   15893 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 18:40:57.476884   15893 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 18:40:57.477006   15893 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 18:40:57.606677   15893 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 18:40:57.606831   15893 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 18:40:57.606954   15893 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 18:40:57.840308   15893 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 18:40:57.842925   15893 out.go:204]   - Generating certificates and keys ...
	I0429 18:40:57.843032   15893 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 18:40:57.843094   15893 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 18:40:57.896000   15893 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 18:40:57.960496   15893 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 18:40:58.086864   15893 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 18:40:58.268463   15893 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 18:40:58.422194   15893 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 18:40:58.422522   15893 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-412183 localhost] and IPs [192.168.39.105 127.0.0.1 ::1]
	I0429 18:40:58.719479   15893 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 18:40:58.719688   15893 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-412183 localhost] and IPs [192.168.39.105 127.0.0.1 ::1]
	I0429 18:40:58.965382   15893 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 18:40:59.500473   15893 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 18:40:59.714871   15893 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 18:40:59.715115   15893 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 18:40:59.789974   15893 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 18:41:00.127269   15893 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 18:41:00.336120   15893 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 18:41:00.510010   15893 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 18:41:00.731591   15893 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 18:41:00.732177   15893 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 18:41:00.734508   15893 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 18:41:00.736683   15893 out.go:204]   - Booting up control plane ...
	I0429 18:41:00.736765   15893 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 18:41:00.736873   15893 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 18:41:00.736967   15893 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 18:41:00.752765   15893 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 18:41:00.753753   15893 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 18:41:00.753884   15893 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 18:41:00.883211   15893 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 18:41:00.883299   15893 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 18:41:01.882675   15893 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001253216s
	I0429 18:41:01.882792   15893 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 18:41:06.883520   15893 kubeadm.go:309] [api-check] The API server is healthy after 5.001576531s
	I0429 18:41:06.896477   15893 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 18:41:06.915477   15893 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 18:41:06.945331   15893 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 18:41:06.945540   15893 kubeadm.go:309] [mark-control-plane] Marking the node addons-412183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 18:41:06.963141   15893 kubeadm.go:309] [bootstrap-token] Using token: tncb7l.y1ni0jeig8r3do1i
	I0429 18:41:06.964664   15893 out.go:204]   - Configuring RBAC rules ...
	I0429 18:41:06.964804   15893 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 18:41:06.970908   15893 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 18:41:06.982087   15893 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 18:41:06.985568   15893 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 18:41:06.988922   15893 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 18:41:06.992743   15893 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 18:41:07.290804   15893 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 18:41:07.746972   15893 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 18:41:08.290334   15893 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 18:41:08.291207   15893 kubeadm.go:309] 
	I0429 18:41:08.291298   15893 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 18:41:08.291318   15893 kubeadm.go:309] 
	I0429 18:41:08.291414   15893 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 18:41:08.291423   15893 kubeadm.go:309] 
	I0429 18:41:08.291460   15893 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 18:41:08.291523   15893 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 18:41:08.291604   15893 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 18:41:08.291619   15893 kubeadm.go:309] 
	I0429 18:41:08.291681   15893 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 18:41:08.291695   15893 kubeadm.go:309] 
	I0429 18:41:08.291770   15893 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 18:41:08.291782   15893 kubeadm.go:309] 
	I0429 18:41:08.291861   15893 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 18:41:08.291970   15893 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 18:41:08.292069   15893 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 18:41:08.292077   15893 kubeadm.go:309] 
	I0429 18:41:08.292191   15893 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 18:41:08.292320   15893 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 18:41:08.292339   15893 kubeadm.go:309] 
	I0429 18:41:08.292451   15893 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token tncb7l.y1ni0jeig8r3do1i \
	I0429 18:41:08.292613   15893 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 18:41:08.292647   15893 kubeadm.go:309] 	--control-plane 
	I0429 18:41:08.292662   15893 kubeadm.go:309] 
	I0429 18:41:08.292799   15893 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 18:41:08.292812   15893 kubeadm.go:309] 
	I0429 18:41:08.292916   15893 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token tncb7l.y1ni0jeig8r3do1i \
	I0429 18:41:08.293051   15893 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
	I0429 18:41:08.293448   15893 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 18:41:08.293479   15893 cni.go:84] Creating CNI manager for ""
	I0429 18:41:08.293488   15893 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 18:41:08.296216   15893 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 18:41:08.297502   15893 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 18:41:08.319993   15893 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
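Note: the log only records that a 496-byte bridge CNI config was copied to /etc/cni/net.d/1-k8s.conflist; the file contents themselves are not captured in this report. As a purely illustrative sketch of what a minimal bridge conflist of that shape looks like (not the actual bytes minikube wrote, and the subnet is an assumption):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF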
	I0429 18:41:08.343987   15893 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 18:41:08.344112   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:08.344137   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-412183 minikube.k8s.io/updated_at=2024_04_29T18_41_08_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=addons-412183 minikube.k8s.io/primary=true
	I0429 18:41:08.377561   15893 ops.go:34] apiserver oom_adj: -16
	I0429 18:41:08.556962   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:09.057203   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:09.557617   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:10.057573   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:10.557307   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:11.057396   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:11.557156   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:12.057282   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:12.557300   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:13.057782   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:13.558021   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:14.057394   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:14.557767   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:15.057547   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:15.557067   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:16.057919   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:16.557626   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:17.057892   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:17.557030   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:18.057967   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:18.557612   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:19.057819   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:19.557033   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:20.057140   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:20.557148   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:21.057435   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:21.557542   15893 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:41:21.721313   15893 kubeadm.go:1107] duration metric: took 13.377266125s to wait for elevateKubeSystemPrivileges
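The repeated "kubectl get sa default" calls above are a readiness poll: the run retries the same command until the default ServiceAccount can be read, and that wait is what the 13.37s elevateKubeSystemPrivileges duration on the preceding line measures (the minikube-rbac clusterrolebinding itself was created earlier in this block). A hand-rolled equivalent of that poll, reusing the exact paths from the log, would be roughly:

    # Illustrative sketch of the retry loop seen above; the sleep interval and
    # the lack of an overall timeout are assumptions, not minikube's behavior.
    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done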
	W0429 18:41:21.721349   15893 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 18:41:21.721357   15893 kubeadm.go:393] duration metric: took 24.525231154s to StartCluster
	I0429 18:41:21.721373   15893 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:41:21.721494   15893 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:41:21.721842   15893 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:41:21.722024   15893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 18:41:21.722040   15893 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
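The toEnable map above lists which addons this run switches on for the addons-412183 profile (ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, and so on); the surrounding lines show them being enabled as part of the same minikube start flow. For reference only, a roughly equivalent manual invocation against that profile would be (assumed CLI form, shown for a few of the enabled addons):

    minikube -p addons-412183 addons enable ingress
    minikube -p addons-412183 addons enable metrics-server
    minikube -p addons-412183 addons enable registry
    minikube -p addons-412183 addons enable csi-hostpath-driver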
	I0429 18:41:21.722023   15893 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 18:41:21.723947   15893 out.go:177] * Verifying Kubernetes components...
	I0429 18:41:21.722154   15893 addons.go:69] Setting yakd=true in profile "addons-412183"
	I0429 18:41:21.722161   15893 addons.go:69] Setting cloud-spanner=true in profile "addons-412183"
	I0429 18:41:21.722166   15893 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-412183"
	I0429 18:41:21.722170   15893 addons.go:69] Setting default-storageclass=true in profile "addons-412183"
	I0429 18:41:21.722174   15893 addons.go:69] Setting gcp-auth=true in profile "addons-412183"
	I0429 18:41:21.722186   15893 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-412183"
	I0429 18:41:21.722194   15893 addons.go:69] Setting registry=true in profile "addons-412183"
	I0429 18:41:21.722202   15893 addons.go:69] Setting storage-provisioner=true in profile "addons-412183"
	I0429 18:41:21.722210   15893 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-412183"
	I0429 18:41:21.722217   15893 addons.go:69] Setting volumesnapshots=true in profile "addons-412183"
	I0429 18:41:21.722211   15893 addons.go:69] Setting metrics-server=true in profile "addons-412183"
	I0429 18:41:21.722209   15893 addons.go:69] Setting ingress=true in profile "addons-412183"
	I0429 18:41:21.722224   15893 addons.go:69] Setting ingress-dns=true in profile "addons-412183"
	I0429 18:41:21.722217   15893 addons.go:69] Setting helm-tiller=true in profile "addons-412183"
	I0429 18:41:21.722229   15893 addons.go:69] Setting inspektor-gadget=true in profile "addons-412183"
	I0429 18:41:21.722241   15893 config.go:182] Loaded profile config "addons-412183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:41:21.725236   15893 mustload.go:65] Loading cluster: addons-412183
	I0429 18:41:21.725255   15893 addons.go:234] Setting addon registry=true in "addons-412183"
	I0429 18:41:21.725266   15893 addons.go:234] Setting addon volumesnapshots=true in "addons-412183"
	I0429 18:41:21.725271   15893 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-412183"
	I0429 18:41:21.725273   15893 addons.go:234] Setting addon yakd=true in "addons-412183"
	I0429 18:41:21.725293   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725295   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725295   15893 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-412183"
	I0429 18:41:21.725311   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725310   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725469   15893 config.go:182] Loaded profile config "addons-412183": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:41:21.725563   15893 addons.go:234] Setting addon ingress-dns=true in "addons-412183"
	I0429 18:41:21.725607   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725768   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725778   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725802   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.725813   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725818   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725832   15893 addons.go:234] Setting addon metrics-server=true in "addons-412183"
	I0429 18:41:21.725847   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.725861   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725867   15893 addons.go:234] Setting addon inspektor-gadget=true in "addons-412183"
	I0429 18:41:21.725890   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725940   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725973   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726130   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.726148   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726169   15893 addons.go:234] Setting addon storage-provisioner=true in "addons-412183"
	I0429 18:41:21.726199   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.726224   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726247   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725232   15893 addons.go:234] Setting addon cloud-spanner=true in "addons-412183"
	I0429 18:41:21.726277   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.726279   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726505   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.726526   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726597   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.725241   15893 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:41:21.726622   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726612   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726225   15893 addons.go:234] Setting addon ingress=true in "addons-412183"
	I0429 18:41:21.726787   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.726988   15893 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-412183"
	I0429 18:41:21.725805   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.727138   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.727155   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.726202   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.730509   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.730552   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.725850   15893 addons.go:234] Setting addon helm-tiller=true in "addons-412183"
	I0429 18:41:21.734386   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.725279   15893 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-412183"
	I0429 18:41:21.734571   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.734748   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.734774   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.734923   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.734953   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.747372   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I0429 18:41:21.747890   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.748245   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34061
	I0429 18:41:21.748416   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44505
	I0429 18:41:21.748470   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42429
	I0429 18:41:21.748662   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.748847   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.749077   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.749092   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.749111   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.749127   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.749161   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.749326   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.749348   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.750191   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.750211   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.750270   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.750330   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.750342   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.750827   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.750866   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.751139   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.751161   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.751169   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.758536   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.758578   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.758663   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.758696   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.758939   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.758957   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.762910   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0429 18:41:21.763501   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.764071   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41933
	I0429 18:41:21.764377   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.764391   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.764752   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.765309   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.765344   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.770161   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.770231   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37191
	I0429 18:41:21.770633   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.770982   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.770999   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.771118   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.771128   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.771458   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.772042   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.772078   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.777703   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I0429 18:41:21.777717   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.777705   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41083
	I0429 18:41:21.778209   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.778453   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.778488   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.778734   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.778750   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.779115   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.779171   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.779367   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.783796   15893 addons.go:234] Setting addon default-storageclass=true in "addons-412183"
	I0429 18:41:21.783839   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.784190   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.784239   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.786363   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.787904   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.788262   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.790388   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45077
	I0429 18:41:21.790506   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.790592   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.790872   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.791497   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.791513   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.791857   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.792379   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.792415   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.800584   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35409
	I0429 18:41:21.801196   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.801927   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.801950   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.803023   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.805600   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.807505   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.809412   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0429 18:41:21.810712   15893 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0429 18:41:21.810730   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0429 18:41:21.810752   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.810847   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0429 18:41:21.810926   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
	I0429 18:41:21.811343   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.811345   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.811869   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.811889   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.812031   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.812043   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.812413   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.812630   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.813650   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.814537   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.814579   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.818366   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.818375   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0429 18:41:21.818404   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.818369   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.818427   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.818445   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.818798   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.818841   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.818908   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.819174   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.819344   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.819357   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.819798   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.819822   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.819967   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.820263   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42807
	I0429 18:41:21.820639   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.820682   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.820767   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42779
	I0429 18:41:21.821161   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.821227   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.821694   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.821713   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.821999   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34967
	I0429 18:41:21.822183   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.822725   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.822770   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.823047   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.823258   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0429 18:41:21.823615   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.823631   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.824003   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.824213   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41701
	I0429 18:41:21.824405   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.824743   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.824764   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.825078   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.825620   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.825637   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.825693   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.826002   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.826113   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.828294   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0429 18:41:21.828339   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32771
	I0429 18:41:21.826874   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.827008   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.826557   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.829460   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0429 18:41:21.829675   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.829688   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0429 18:41:21.831041   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0429 18:41:21.830121   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.830241   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.830345   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.830431   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.831527   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.833757   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0429 18:41:21.832960   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.833228   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.833740   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36267
	I0429 18:41:21.834609   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.834994   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.836104   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0429 18:41:21.837455   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0429 18:41:21.838601   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0429 18:41:21.837456   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0429 18:41:21.837486   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.837202   15893 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0429 18:41:21.837515   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.837188   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.837866   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.842857   15893 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0429 18:41:21.843878   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0429 18:41:21.843892   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0429 18:41:21.843913   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.840207   15893 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-412183"
	I0429 18:41:21.843993   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:21.844382   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.844421   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.844471   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0429 18:41:21.844585   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46283
	I0429 18:41:21.844589   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.844608   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.844674   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.844717   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.845130   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.845184   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.845221   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.845734   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.846305   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.846344   15893 out.go:177]   - Using image docker.io/registry:2.8.3
	I0429 18:41:21.847434   15893 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0429 18:41:21.847403   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.845668   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.846636   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.847133   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.847422   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45779
	I0429 18:41:21.848209   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.848498   15893 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 18:41:21.849541   15893 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 18:41:21.851047   15893 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 18:41:21.851064   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0429 18:41:21.851082   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.849601   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.851145   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.851168   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.848580   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.848635   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.849227   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.851224   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.852885   15893 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0429 18:41:21.848524   15893 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0429 18:41:21.854386   15893 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0429 18:41:21.854401   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0429 18:41:21.850490   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.854420   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.850680   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.851582   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.852077   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.852476   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.852912   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0429 18:41:21.854517   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.850153   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.855179   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.855214   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.855273   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.856477   15893 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0429 18:41:21.855536   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.855567   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.856517   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.855689   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.856237   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.856614   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.857934   15893 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 18:41:21.857952   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0429 18:41:21.857966   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.856750   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.856907   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.857090   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.857545   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.857687   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.859236   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.859252   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.859334   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.859376   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.860206   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40347
	I0429 18:41:21.861708   15893 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0429 18:41:21.860417   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.860666   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.860710   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.860863   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.861157   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.861633   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.861685   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.862295   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45773
	I0429 18:41:21.862708   15893 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0429 18:41:21.863717   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0429 18:41:21.863747   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.863345   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I0429 18:41:21.863781   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.863696   15893 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0429 18:41:21.866259   15893 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 18:41:21.866272   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0429 18:41:21.866285   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.863504   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.866328   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.866347   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.863928   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.866359   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.863951   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.865352   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.865377   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.866400   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.867749   15893 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0429 18:41:21.865440   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.865446   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.865603   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.865655   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.866115   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.866980   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.867014   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.868978   15893 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 18:41:21.868991   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 18:41:21.869005   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.870272   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.870292   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.870272   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.871770   15893 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0429 18:41:21.870411   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.870412   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.870434   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.870431   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.870764   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.870790   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.871693   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.871870   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.872264   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0429 18:41:21.872540   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.873062   15893 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0429 18:41:21.873074   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0429 18:41:21.872871   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.873089   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.873119   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.873137   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.873159   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.873171   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.873180   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.873203   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.873703   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.873739   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.873812   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.873821   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.875229   15893 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.16
	I0429 18:41:21.873712   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.874594   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.875260   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.876738   15893 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0429 18:41:21.874616   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.876755   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0429 18:41:21.876771   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.874637   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.874789   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.875155   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.876857   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.876880   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.875461   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.875904   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.875953   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.877076   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.877117   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.877162   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.877347   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.877377   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.877539   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.877706   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.877756   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.878177   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.878198   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.878376   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.878597   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.878808   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.878946   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.879385   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.879596   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.879714   15893 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 18:41:21.879724   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 18:41:21.879738   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.879824   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37665
	I0429 18:41:21.881382   15893 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 18:41:21.880170   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.881203   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.881846   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.882782   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.882881   15893 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 18:41:21.882890   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 18:41:21.882900   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.882927   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.882948   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.883051   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.883220   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.883245   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.883267   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.883442   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.883624   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.883676   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.883771   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.883790   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.883878   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.884012   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.884239   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.884747   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:21.884771   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:21.885843   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.886171   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.886201   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.886285   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.886444   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.886586   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.886702   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:21.913521   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37089
	I0429 18:41:21.913880   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:21.914317   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:21.914343   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:21.914644   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:21.914824   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:21.916181   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:21.918233   15893 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0429 18:41:21.919669   15893 out.go:177]   - Using image docker.io/busybox:stable
	I0429 18:41:21.921026   15893 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 18:41:21.921048   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0429 18:41:21.921071   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:21.924047   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.924435   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:21.924466   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:21.924706   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:21.924892   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:21.925043   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:21.925218   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	W0429 18:41:21.932807   15893 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59772->192.168.39.105:22: read: connection reset by peer
	I0429 18:41:21.932841   15893 retry.go:31] will retry after 310.484288ms: ssh: handshake failed: read tcp 192.168.39.1:59772->192.168.39.105:22: read: connection reset by peer
	I0429 18:41:22.125012   15893 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 18:41:22.125238   15893 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 18:41:22.149904   15893 node_ready.go:35] waiting up to 6m0s for node "addons-412183" to be "Ready" ...
	I0429 18:41:22.153523   15893 node_ready.go:49] node "addons-412183" has status "Ready":"True"
	I0429 18:41:22.153543   15893 node_ready.go:38] duration metric: took 3.606629ms for node "addons-412183" to be "Ready" ...
	I0429 18:41:22.153551   15893 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 18:41:22.160646   15893 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace to be "Ready" ...
	I0429 18:41:22.235331   15893 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0429 18:41:22.235351   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0429 18:41:22.272339   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0429 18:41:22.273946   15893 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 18:41:22.273968   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0429 18:41:22.323899   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0429 18:41:22.323924   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0429 18:41:22.323922   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0429 18:41:22.328194   15893 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0429 18:41:22.328215   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0429 18:41:22.331910   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0429 18:41:22.334742   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0429 18:41:22.337692   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 18:41:22.357069   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 18:41:22.380840   15893 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0429 18:41:22.380862   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0429 18:41:22.394920   15893 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0429 18:41:22.394936   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0429 18:41:22.425132   15893 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0429 18:41:22.425154   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0429 18:41:22.428657   15893 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0429 18:41:22.428672   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0429 18:41:22.448111   15893 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 18:41:22.448130   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 18:41:22.491871   15893 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0429 18:41:22.491892   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0429 18:41:22.530852   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0429 18:41:22.530879   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0429 18:41:22.583204   15893 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 18:41:22.583238   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0429 18:41:22.632671   15893 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 18:41:22.632691   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 18:41:22.654837   15893 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0429 18:41:22.654860   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0429 18:41:22.660951   15893 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0429 18:41:22.660965   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0429 18:41:22.688738   15893 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0429 18:41:22.688763   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0429 18:41:22.731691   15893 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0429 18:41:22.731714   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0429 18:41:22.739466   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0429 18:41:22.739490   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0429 18:41:22.761664   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0429 18:41:22.834889   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0429 18:41:22.881900   15893 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0429 18:41:22.881934   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0429 18:41:22.885734   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0429 18:41:22.889297   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 18:41:22.923371   15893 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0429 18:41:22.923406   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0429 18:41:22.968770   15893 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0429 18:41:22.968794   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0429 18:41:22.976719   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0429 18:41:22.976738   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0429 18:41:23.044928   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0429 18:41:23.109976   15893 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0429 18:41:23.110011   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0429 18:41:23.258918   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0429 18:41:23.258940   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0429 18:41:23.292363   15893 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0429 18:41:23.292407   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0429 18:41:23.409090   15893 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0429 18:41:23.409117   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0429 18:41:23.578598   15893 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0429 18:41:23.578621   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0429 18:41:23.631783   15893 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 18:41:23.631802   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0429 18:41:23.663329   15893 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 18:41:23.663353   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0429 18:41:23.848788   15893 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.723515667s)
	I0429 18:41:23.848816   15893 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
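	For context on the record above: the sed pipeline that just completed rewrites the coredns ConfigMap in place. It inserts a log directive before the existing errors line and a hosts block mapping 192.168.39.1 to host.minikube.internal (with fallthrough) before the forward . /etc/resolv.conf line, then feeds the result back through kubectl replace. A minimal sketch of the affected Corefile fragment after that edit (assuming an otherwise stock CoreDNS config; unrelated plugins omitted) would be:
	
	    log
	    errors
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	
	This is what allows pods in the cluster to resolve host.minikube.internal to the host-side address of the mk-addons-412183 network.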
	I0429 18:41:23.871083   15893 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0429 18:41:23.871104   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0429 18:41:24.043826   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 18:41:24.052737   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0429 18:41:24.077668   15893 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0429 18:41:24.077696   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0429 18:41:24.175836   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:24.354341   15893 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-412183" context rescaled to 1 replicas
	I0429 18:41:24.494637   15893 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0429 18:41:24.494665   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0429 18:41:24.699452   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.427070065s)
	I0429 18:41:24.699505   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:24.699518   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:24.699840   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:24.699865   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:24.699876   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:24.699889   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:24.699897   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:24.700162   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:24.700168   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:24.700191   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:24.808157   15893 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 18:41:24.808181   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0429 18:41:25.054848   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0429 18:41:26.194291   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:28.247646   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:28.897992   15893 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0429 18:41:28.898029   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:28.901259   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:28.901723   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:28.901754   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:28.901954   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:28.902165   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:28.902322   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:28.902463   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:29.424833   15893 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0429 18:41:29.737797   15893 addons.go:234] Setting addon gcp-auth=true in "addons-412183"
	I0429 18:41:29.737919   15893 host.go:66] Checking if "addons-412183" exists ...
	I0429 18:41:29.738277   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:29.738311   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:29.755386   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I0429 18:41:29.755871   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:29.756385   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:29.756413   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:29.756732   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:29.757181   15893 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:41:29.757207   15893 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:41:29.773273   15893 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0429 18:41:29.773733   15893 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:41:29.774245   15893 main.go:141] libmachine: Using API Version  1
	I0429 18:41:29.774276   15893 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:41:29.774608   15893 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:41:29.774766   15893 main.go:141] libmachine: (addons-412183) Calling .GetState
	I0429 18:41:29.776406   15893 main.go:141] libmachine: (addons-412183) Calling .DriverName
	I0429 18:41:29.776620   15893 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0429 18:41:29.776641   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHHostname
	I0429 18:41:29.779418   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:29.779771   15893 main.go:141] libmachine: (addons-412183) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0f:aa", ip: ""} in network mk-addons-412183: {Iface:virbr1 ExpiryTime:2024-04-29 19:40:38 +0000 UTC Type:0 Mac:52:54:00:ae:0f:aa Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:addons-412183 Clientid:01:52:54:00:ae:0f:aa}
	I0429 18:41:29.779802   15893 main.go:141] libmachine: (addons-412183) DBG | domain addons-412183 has defined IP address 192.168.39.105 and MAC address 52:54:00:ae:0f:aa in network mk-addons-412183
	I0429 18:41:29.779981   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHPort
	I0429 18:41:29.780168   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHKeyPath
	I0429 18:41:29.780296   15893 main.go:141] libmachine: (addons-412183) Calling .GetSSHUsername
	I0429 18:41:29.780454   15893 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/addons-412183/id_rsa Username:docker}
	I0429 18:41:30.378080   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:31.852557   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.528599414s)
	I0429 18:41:31.852627   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.852645   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.852620   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.520679632s)
	I0429 18:41:31.852706   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.852714   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.852738   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.515029952s)
	I0429 18:41:31.853054   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.963734001s)
	I0429 18:41:31.853072   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853076   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853084   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.853089   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.852711   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.51794703s)
	I0429 18:41:31.853152   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853160   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.853163   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.80819231s)
	I0429 18:41:31.852819   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.495714452s)
	I0429 18:41:31.853191   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853199   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853202   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.853206   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.852839   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.853241   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.853250   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853257   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.852890   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.091200902s)
	I0429 18:41:31.853295   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853302   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.853347   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.809484017s)
	W0429 18:41:31.853377   15893 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 18:41:31.853399   15893 retry.go:31] will retry after 298.609511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0429 18:41:31.852942   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.018027685s)
	I0429 18:41:31.853466   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.852983   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.967224213s)
	I0429 18:41:31.853474   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.800670046s)
	I0429 18:41:31.853493   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853497   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.853502   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.853509   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.852990   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.853520   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.853529   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.852994   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.853017   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.853476   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.853536   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858097   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858107   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858118   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858131   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858136   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858136   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858147   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858154   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858162   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858172   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858176   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858179   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858183   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858188   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858211   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858215   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858225   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858233   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858235   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858240   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858245   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858254   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858267   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858172   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858275   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858283   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858288   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858293   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858311   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858332   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858150   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858345   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858350   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858355   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858360   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858246   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858272   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858369   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858283   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858140   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858360   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858387   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858377   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858398   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.858400   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858407   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.858416   15893 addons.go:470] Verifying addon ingress=true in "addons-412183"
	I0429 18:41:31.858469   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.861951   15893 out.go:177] * Verifying ingress addon...
	I0429 18:41:31.858388   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858497   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858555   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858605   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858391   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.858634   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858683   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858822   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858832   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858854   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858872   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.858885   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.858923   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.859269   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.859293   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.863597   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.863602   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.863628   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.863662   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.863712   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.863722   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.863630   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.863741   15893 addons.go:470] Verifying addon registry=true in "addons-412183"
	I0429 18:41:31.863714   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.863639   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.865661   15893 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-412183 service yakd-dashboard -n yakd-dashboard
	
	I0429 18:41:31.863664   15893 addons.go:470] Verifying addon metrics-server=true in "addons-412183"
	I0429 18:41:31.864051   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.864071   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.864071   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.864118   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.864524   15893 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0429 18:41:31.868446   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.868463   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.868478   15893 out.go:177] * Verifying registry addon...
	I0429 18:41:31.870416   15893 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0429 18:41:31.905112   15893 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0429 18:41:31.905155   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:31.914142   15893 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0429 18:41:31.914169   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:31.926881   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.926907   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.927186   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.927204   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:31.927209   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	W0429 18:41:31.927300   15893 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0429 18:41:31.943415   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:31.943434   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:31.943821   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:31.943833   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:31.943846   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:32.152366   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0429 18:41:32.386127   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:32.415620   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:32.593435   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.538531416s)
	I0429 18:41:32.593487   15893 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.816849105s)
	I0429 18:41:32.595121   15893 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0429 18:41:32.593487   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:32.596375   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:32.597495   15893 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0429 18:41:32.596671   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:32.598632   15893 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0429 18:41:32.598646   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0429 18:41:32.597524   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:32.598726   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:32.598742   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:32.596702   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:32.599081   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:32.599096   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:32.599119   15893 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-412183"
	I0429 18:41:32.600387   15893 out.go:177] * Verifying csi-hostpath-driver addon...
	I0429 18:41:32.602386   15893 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0429 18:41:32.619768   15893 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0429 18:41:32.619790   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:32.689963   15893 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0429 18:41:32.689983   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0429 18:41:32.703078   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:32.785280   15893 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 18:41:32.785302   15893 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0429 18:41:32.815514   15893 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0429 18:41:32.884880   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:32.885158   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:33.116203   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:33.378314   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:33.381800   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:33.609846   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:33.879052   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:33.879561   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:34.108343   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:34.386862   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:34.387006   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:34.596574   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.444160633s)
	I0429 18:41:34.596628   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:34.596641   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:34.596899   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:34.596919   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:34.596929   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:34.596936   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:34.597159   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:34.597204   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:34.597211   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:34.616987   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:34.809005   15893 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.993454332s)
	I0429 18:41:34.809051   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:34.809063   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:34.809388   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:34.809405   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:34.809427   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:34.809480   15893 main.go:141] libmachine: Making call to close driver server
	I0429 18:41:34.809505   15893 main.go:141] libmachine: (addons-412183) Calling .Close
	I0429 18:41:34.809771   15893 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:41:34.809823   15893 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:41:34.809792   15893 main.go:141] libmachine: (addons-412183) DBG | Closing plugin on server side
	I0429 18:41:34.811630   15893 addons.go:470] Verifying addon gcp-auth=true in "addons-412183"
	I0429 18:41:34.813722   15893 out.go:177] * Verifying gcp-auth addon...
	I0429 18:41:34.815639   15893 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0429 18:41:34.845113   15893 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0429 18:41:34.845139   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
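For context, these kapi.go lines are a label-selector poll: list the pods matching the selector, check whether every match has reached Running, then sleep and retry. A minimal client-go sketch of that pattern, assuming a kubeconfig at the default location (the names and flow are illustrative, not minikube's actual kapi code):

    // waitforpods.go: poll pods matching a label selector until all are Running.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods blocks until every pod matching selector in ns reports Running,
    // or until ctx expires.
    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            running := 0
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    running++
                }
            }
            if len(pods.Items) > 0 && running == len(pods.Items) {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitForPods(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
            panic(err)
        }
        fmt.Println("gcp-auth pods ready")
    }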
	I0429 18:41:34.888590   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:34.910594   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:35.116310   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:35.174427   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:35.325794   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:35.379249   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:35.379710   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:35.610944   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:35.819973   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:35.872912   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:35.875481   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:36.108705   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:36.319861   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:36.372606   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:36.376367   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:36.609295   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:36.822386   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:36.872563   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:36.876084   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:37.109466   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:37.320651   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:37.373401   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:37.374670   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:37.608612   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:37.667400   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:37.820198   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:37.874462   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:37.876983   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:38.108403   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:38.320108   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:38.373200   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:38.377576   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:38.608033   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:38.830396   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:38.882595   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:38.882754   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:39.108334   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:39.320238   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:39.374251   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:39.376285   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:39.628752   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:39.667870   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:39.822009   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:39.872673   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:39.876399   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:40.111373   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:40.329710   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:40.378545   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:40.383862   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:40.608516   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:40.820460   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:40.873263   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:40.875885   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:41.109185   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:41.319765   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:41.373535   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:41.376955   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:41.608485   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:41.819723   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:41.872984   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:41.876324   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:42.108383   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:42.168269   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:42.320186   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:42.374619   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:42.376550   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:42.608886   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:42.819924   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:42.872500   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:42.875796   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:43.109248   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:43.321505   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:43.374291   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:43.375664   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:43.608160   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:43.819734   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:43.873499   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:43.876243   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:44.137939   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:44.168380   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:44.320880   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:44.373324   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:44.375225   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:44.610217   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:44.819958   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:44.874598   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:44.877704   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:45.110330   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:45.320461   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:45.376075   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:45.382396   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:45.609287   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:45.819998   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:45.873620   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:45.875945   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:46.110654   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:46.173378   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:46.320147   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:46.374740   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:46.377708   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:46.609821   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:46.819360   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:46.873810   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:46.880959   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:47.109093   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:47.320414   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:47.374961   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:47.376267   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:47.609385   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:47.819465   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:47.875396   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:47.879909   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:48.109395   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:48.320437   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:48.712989   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:48.715484   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:48.716631   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:48.717716   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:48.831230   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:48.873144   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:48.876107   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:49.108972   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:49.320297   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:49.373287   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:49.376098   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:49.608911   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:49.819032   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:49.883145   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:49.883211   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:50.108364   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:50.319823   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:50.372610   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:50.375650   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:50.611469   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:51.201081   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:51.201610   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:51.205536   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:51.205574   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:51.206852   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:51.319588   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:51.377338   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:51.377395   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:51.608971   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:51.819757   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:51.873780   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:51.875290   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:52.113784   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:52.319874   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:52.373889   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:52.375669   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:52.609126   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:52.819744   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:52.873148   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:52.874636   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:53.108698   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:53.320209   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:53.373557   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:53.376251   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:53.608927   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:53.673375   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:53.819885   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:53.873418   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:53.876802   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:54.108482   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:54.320192   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:54.373321   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:54.376620   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:54.609693   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:54.820157   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:54.873978   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:54.876724   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:55.108595   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:55.319170   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:55.375499   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:55.377357   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:55.611225   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:55.820071   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:55.874336   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:55.887670   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:56.108718   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:56.170812   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:56.319124   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:56.373706   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:56.375954   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:56.609652   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:56.819905   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:56.873029   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:56.877833   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:57.109005   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:57.320153   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:57.384517   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:57.387771   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:57.608593   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:57.819563   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:57.874690   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:57.875182   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:58.109084   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:58.320431   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:58.374450   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:58.377976   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:58.608597   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:58.680754   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:41:58.820591   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:58.875721   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:58.876602   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:59.109625   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:59.330331   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:59.374858   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:59.376332   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:41:59.609368   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:41:59.820137   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:41:59.874020   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:41:59.875668   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:00.111412   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:00.321186   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:00.373492   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:00.380546   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:00.614160   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:00.820131   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:00.878827   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:00.879626   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:01.109100   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:01.168450   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:42:01.320124   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:01.374477   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:01.377514   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:01.613638   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:01.819614   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:01.873882   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:01.877285   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:02.108378   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:02.319134   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:02.373307   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:02.376965   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:02.608587   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:02.820396   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:02.877832   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:02.880875   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:03.108458   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:03.319320   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:03.374011   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:03.376558   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:04.063012   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:04.068729   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:04.078163   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:04.093598   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:04.097202   15893 pod_ready.go:102] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"False"
	I0429 18:42:04.110507   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:04.172134   15893 pod_ready.go:92] pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace has status "Ready":"True"
	I0429 18:42:04.172166   15893 pod_ready.go:81] duration metric: took 42.011497187s for pod "coredns-7db6d8ff4d-2xt85" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.172180   15893 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hx6q4" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.184492   15893 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-hx6q4" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-hx6q4" not found
	I0429 18:42:04.184518   15893 pod_ready.go:81] duration metric: took 12.331113ms for pod "coredns-7db6d8ff4d-hx6q4" in "kube-system" namespace to be "Ready" ...
	E0429 18:42:04.184528   15893 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-hx6q4" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-hx6q4" not found
	I0429 18:42:04.184536   15893 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.192453   15893 pod_ready.go:92] pod "etcd-addons-412183" in "kube-system" namespace has status "Ready":"True"
	I0429 18:42:04.192479   15893 pod_ready.go:81] duration metric: took 7.936712ms for pod "etcd-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.192488   15893 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.201460   15893 pod_ready.go:92] pod "kube-apiserver-addons-412183" in "kube-system" namespace has status "Ready":"True"
	I0429 18:42:04.201490   15893 pod_ready.go:81] duration metric: took 8.993998ms for pod "kube-apiserver-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.201502   15893 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.221988   15893 pod_ready.go:92] pod "kube-controller-manager-addons-412183" in "kube-system" namespace has status "Ready":"True"
	I0429 18:42:04.222011   15893 pod_ready.go:81] duration metric: took 20.501343ms for pod "kube-controller-manager-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.222021   15893 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xsvwz" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.319805   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:04.373446   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:04.376704   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:04.481317   15893 pod_ready.go:92] pod "kube-proxy-xsvwz" in "kube-system" namespace has status "Ready":"True"
	I0429 18:42:04.481346   15893 pod_ready.go:81] duration metric: took 259.317996ms for pod "kube-proxy-xsvwz" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.481361   15893 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.611975   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:04.820115   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:04.874235   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:04.876474   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:04.881534   15893 pod_ready.go:92] pod "kube-scheduler-addons-412183" in "kube-system" namespace has status "Ready":"True"
	I0429 18:42:04.881560   15893 pod_ready.go:81] duration metric: took 400.191017ms for pod "kube-scheduler-addons-412183" in "kube-system" namespace to be "Ready" ...
	I0429 18:42:04.881572   15893 pod_ready.go:38] duration metric: took 42.728010442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 18:42:04.881596   15893 api_server.go:52] waiting for apiserver process to appear ...
	I0429 18:42:04.881659   15893 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 18:42:04.918303   15893 api_server.go:72] duration metric: took 43.196146755s to wait for apiserver process to appear ...
	I0429 18:42:04.918332   15893 api_server.go:88] waiting for apiserver healthz status ...
	I0429 18:42:04.918363   15893 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0429 18:42:04.922691   15893 api_server.go:279] https://192.168.39.105:8443/healthz returned 200:
	ok
	I0429 18:42:04.923645   15893 api_server.go:141] control plane version: v1.30.0
	I0429 18:42:04.923670   15893 api_server.go:131] duration metric: took 5.331478ms to wait for apiserver health ...
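The healthz check logged just above is a plain GET against https://<apiserver>:8443/healthz that expects a 200 response with the body "ok". A minimal sketch using client-go's raw REST access, assuming kubeconfig credentials (minikube's api_server.go builds its own client, so this is illustrative only):

    // healthz.go: ask the apiserver for its /healthz status.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // A healthy control plane answers 200 with the body "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body))
    }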
	I0429 18:42:04.923680   15893 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 18:42:05.088592   15893 system_pods.go:59] 18 kube-system pods found
	I0429 18:42:05.088629   15893 system_pods.go:61] "coredns-7db6d8ff4d-2xt85" [ff070716-6e1d-4ac4-96c7-fa6eb4105594] Running
	I0429 18:42:05.088638   15893 system_pods.go:61] "csi-hostpath-attacher-0" [55526fb3-ae23-4b9e-a7e0-4a8b11e45754] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0429 18:42:05.088644   15893 system_pods.go:61] "csi-hostpath-resizer-0" [489ad110-3b06-480c-96f2-91d6b34e7be8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0429 18:42:05.088651   15893 system_pods.go:61] "csi-hostpathplugin-hgrqx" [2fc787b6-d8f6-4a9d-b816-98ddc0f65eab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0429 18:42:05.088656   15893 system_pods.go:61] "etcd-addons-412183" [8bc479ae-8648-452e-8244-8940efb5b98e] Running
	I0429 18:42:05.088662   15893 system_pods.go:61] "kube-apiserver-addons-412183" [6af7dd3d-3217-488e-96e5-d2597f1eb0e9] Running
	I0429 18:42:05.088665   15893 system_pods.go:61] "kube-controller-manager-addons-412183" [14d64bbb-9a33-4024-8064-8fbb67abc597] Running
	I0429 18:42:05.088669   15893 system_pods.go:61] "kube-ingress-dns-minikube" [3ea4da73-e176-41ea-be8d-a33571308b0c] Running
	I0429 18:42:05.088672   15893 system_pods.go:61] "kube-proxy-xsvwz" [c22033d6-3278-412b-8d58-ae73835285fd] Running
	I0429 18:42:05.088678   15893 system_pods.go:61] "kube-scheduler-addons-412183" [f032228f-858a-4f5a-a47c-9b8cd62a0593] Running
	I0429 18:42:05.088683   15893 system_pods.go:61] "metrics-server-c59844bb4-xbdnx" [0d97597b-550d-4b86-850f-8b839281a545] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 18:42:05.088693   15893 system_pods.go:61] "nvidia-device-plugin-daemonset-bdlx2" [ae8e59a0-c1bc-4229-a163-f1999243d24f] Running
	I0429 18:42:05.088699   15893 system_pods.go:61] "registry-proxy-fvvc6" [8835c731-1707-4dca-9621-b9f326ad0cd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0429 18:42:05.088704   15893 system_pods.go:61] "registry-vkwz2" [cbb1f320-7afd-403e-96b8-4e34ed9b2d78] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0429 18:42:05.088714   15893 system_pods.go:61] "snapshot-controller-745499f584-gmgpd" [d4da05e7-824f-4178-91fc-a8d9d9f5e065] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 18:42:05.088721   15893 system_pods.go:61] "snapshot-controller-745499f584-wfndt" [fc88fee2-c59d-4e4f-a33c-347f6c34fcbb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 18:42:05.088728   15893 system_pods.go:61] "storage-provisioner" [b4e8e367-62f5-4063-8cd9-523506a10609] Running
	I0429 18:42:05.088733   15893 system_pods.go:61] "tiller-deploy-6677d64bcd-424j5" [d9343705-996d-40f7-9597-aba3801d8af1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0429 18:42:05.088741   15893 system_pods.go:74] duration metric: took 165.050346ms to wait for pod list to return data ...
	I0429 18:42:05.088749   15893 default_sa.go:34] waiting for default service account to be created ...
	I0429 18:42:05.108801   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:05.281951   15893 default_sa.go:45] found service account: "default"
	I0429 18:42:05.281985   15893 default_sa.go:55] duration metric: took 193.227143ms for default service account to be created ...
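The default_sa.go step amounts to confirming that the controller manager has created the "default" ServiceAccount in the "default" namespace. A minimal sketch, assuming kubeconfig access (not minikube's actual default_sa.go code):

    // defaultsa.go: confirm the "default" ServiceAccount exists.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("found service account: %q\n", sa.Name)
    }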
	I0429 18:42:05.282001   15893 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 18:42:05.321336   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:05.376405   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:05.378396   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:05.488579   15893 system_pods.go:86] 18 kube-system pods found
	I0429 18:42:05.488610   15893 system_pods.go:89] "coredns-7db6d8ff4d-2xt85" [ff070716-6e1d-4ac4-96c7-fa6eb4105594] Running
	I0429 18:42:05.488618   15893 system_pods.go:89] "csi-hostpath-attacher-0" [55526fb3-ae23-4b9e-a7e0-4a8b11e45754] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0429 18:42:05.488625   15893 system_pods.go:89] "csi-hostpath-resizer-0" [489ad110-3b06-480c-96f2-91d6b34e7be8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0429 18:42:05.488632   15893 system_pods.go:89] "csi-hostpathplugin-hgrqx" [2fc787b6-d8f6-4a9d-b816-98ddc0f65eab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0429 18:42:05.488639   15893 system_pods.go:89] "etcd-addons-412183" [8bc479ae-8648-452e-8244-8940efb5b98e] Running
	I0429 18:42:05.488645   15893 system_pods.go:89] "kube-apiserver-addons-412183" [6af7dd3d-3217-488e-96e5-d2597f1eb0e9] Running
	I0429 18:42:05.488652   15893 system_pods.go:89] "kube-controller-manager-addons-412183" [14d64bbb-9a33-4024-8064-8fbb67abc597] Running
	I0429 18:42:05.488659   15893 system_pods.go:89] "kube-ingress-dns-minikube" [3ea4da73-e176-41ea-be8d-a33571308b0c] Running
	I0429 18:42:05.488670   15893 system_pods.go:89] "kube-proxy-xsvwz" [c22033d6-3278-412b-8d58-ae73835285fd] Running
	I0429 18:42:05.488676   15893 system_pods.go:89] "kube-scheduler-addons-412183" [f032228f-858a-4f5a-a47c-9b8cd62a0593] Running
	I0429 18:42:05.488690   15893 system_pods.go:89] "metrics-server-c59844bb4-xbdnx" [0d97597b-550d-4b86-850f-8b839281a545] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 18:42:05.488697   15893 system_pods.go:89] "nvidia-device-plugin-daemonset-bdlx2" [ae8e59a0-c1bc-4229-a163-f1999243d24f] Running
	I0429 18:42:05.488703   15893 system_pods.go:89] "registry-proxy-fvvc6" [8835c731-1707-4dca-9621-b9f326ad0cd2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0429 18:42:05.488711   15893 system_pods.go:89] "registry-vkwz2" [cbb1f320-7afd-403e-96b8-4e34ed9b2d78] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0429 18:42:05.488717   15893 system_pods.go:89] "snapshot-controller-745499f584-gmgpd" [d4da05e7-824f-4178-91fc-a8d9d9f5e065] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 18:42:05.488723   15893 system_pods.go:89] "snapshot-controller-745499f584-wfndt" [fc88fee2-c59d-4e4f-a33c-347f6c34fcbb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0429 18:42:05.488727   15893 system_pods.go:89] "storage-provisioner" [b4e8e367-62f5-4063-8cd9-523506a10609] Running
	I0429 18:42:05.488733   15893 system_pods.go:89] "tiller-deploy-6677d64bcd-424j5" [d9343705-996d-40f7-9597-aba3801d8af1] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0429 18:42:05.488740   15893 system_pods.go:126] duration metric: took 206.730841ms to wait for k8s-apps to be running ...
	I0429 18:42:05.488747   15893 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 18:42:05.488799   15893 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 18:42:05.531801   15893 system_svc.go:56] duration metric: took 43.04686ms WaitForService to wait for kubelet
	I0429 18:42:05.531843   15893 kubeadm.go:576] duration metric: took 43.809691823s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 18:42:05.531869   15893 node_conditions.go:102] verifying NodePressure condition ...
	I0429 18:42:05.610186   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:05.683860   15893 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 18:42:05.683890   15893 node_conditions.go:123] node cpu capacity is 2
	I0429 18:42:05.683902   15893 node_conditions.go:105] duration metric: took 152.029356ms to run NodePressure ...
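The NodePressure step reads capacity and conditions straight off the Node objects; the figures logged above (17734596Ki of ephemeral storage, 2 CPUs) come from the node's reported capacity. A minimal sketch that prints the same data, assuming kubeconfig access (illustrative only, not minikube's node_conditions.go):

    // nodeinfo.go: print node capacity and pressure conditions.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
            // MemoryPressure, DiskPressure, and PIDPressure should all be False on a healthy node.
            for _, c := range n.Status.Conditions {
                fmt.Printf("  %s=%s\n", c.Type, c.Status)
            }
        }
    }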
	I0429 18:42:05.683914   15893 start.go:240] waiting for startup goroutines ...
	I0429 18:42:05.820619   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:05.875650   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:05.875970   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:06.108856   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:06.321187   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:06.374049   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:06.377237   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:06.615999   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:06.820236   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:06.873988   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:06.876131   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:07.108456   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:07.321011   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:07.373736   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:07.376004   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:07.608641   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:07.819572   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:07.873988   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:07.874790   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:08.113966   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:08.320232   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:08.374410   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:08.379774   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:08.608282   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:08.820036   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:08.873448   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:08.876877   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:09.108499   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:09.320986   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:09.376651   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:09.378992   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:09.609695   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:09.827120   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:09.873491   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:09.879377   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:10.109552   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:10.320707   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:10.373516   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:10.375920   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:10.609434   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:10.821201   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:10.874310   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:10.877710   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:11.110178   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:11.320692   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:11.373412   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:11.378017   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:11.608894   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:11.819844   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:11.874570   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:11.876158   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:12.109961   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:12.320457   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:12.376047   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:12.376684   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:12.611929   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:12.819738   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:12.879350   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:12.883361   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:13.109916   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:13.319233   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:13.380085   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:13.382373   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:13.609784   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:13.820310   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:13.879895   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:13.881343   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:14.109710   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:14.320062   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:14.374056   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:14.375213   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:14.614982   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:14.820562   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:14.879273   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:14.879453   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:15.108951   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:15.326827   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:15.379677   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:15.380083   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:15.610635   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:15.820916   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:15.873979   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:15.878653   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:16.109114   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:16.320650   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:16.373764   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:16.375391   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:16.609808   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:16.820018   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:16.873310   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:16.876372   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:17.109032   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:17.320272   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:17.377505   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:17.378702   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:17.611550   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:17.819699   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:17.880584   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:17.880895   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:18.109239   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:18.323426   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:18.373828   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:18.376453   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:18.613136   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:18.820698   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:18.873344   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:18.879195   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:19.109839   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:19.320098   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:19.373927   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:19.378920   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:19.610004   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:19.820126   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:19.879219   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:19.883046   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:20.109318   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:20.319710   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:20.372647   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:20.376390   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:20.610209   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:21.059879   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:21.061861   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:21.064271   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:21.109434   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:21.320502   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:21.376257   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:21.377651   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:21.609980   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:21.819798   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:21.874184   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:21.878380   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:22.109303   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:22.319667   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:22.373303   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:22.375720   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:22.610607   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:22.819749   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:22.877238   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:22.877439   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:23.110671   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:23.320153   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:23.373551   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:23.376052   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:23.609176   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:23.822500   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:23.878698   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0429 18:42:23.879077   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:24.109071   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:24.320369   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:24.373413   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:24.376775   15893 kapi.go:107] duration metric: took 52.506357023s to wait for kubernetes.io/minikube-addons=registry ...
	I0429 18:42:24.609432   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:24.820180   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:24.874945   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:25.122211   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:25.320237   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:25.374370   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:25.609525   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:25.820344   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:25.874218   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:26.111568   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:26.320003   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:26.373253   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:26.610615   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:26.819944   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:26.874490   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:27.108643   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:27.320232   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:27.373969   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:27.609764   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:27.820535   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:27.875572   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:28.110508   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:28.320137   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:28.373592   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:28.609621   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:28.820797   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:28.874709   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:29.116073   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:29.498339   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:29.499468   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:29.609216   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:29.819885   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:29.872933   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:30.108448   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:30.319844   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:30.373004   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:30.610569   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:30.819214   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:30.873868   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:31.108600   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:31.320264   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:31.373519   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:31.778520   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:31.820133   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:31.873766   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:32.110403   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:32.319923   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:32.374027   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:32.609685   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:32.823506   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:32.875338   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:33.111249   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:33.319314   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:33.374435   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:33.609148   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:33.822750   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:33.874056   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:34.111432   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:34.318855   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:34.372734   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:34.610570   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:34.820089   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:34.873342   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:35.108972   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:35.320253   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:35.373137   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:35.609786   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:35.819792   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:35.875787   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:36.117115   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:36.326164   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:36.376089   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:36.610014   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:36.820407   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:36.874011   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:37.120166   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:37.320500   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:37.373315   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:37.608359   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:37.819557   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:37.875384   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:38.110662   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:38.323731   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:38.373208   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:38.609197   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:38.820787   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:38.873867   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:39.115772   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:39.319494   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:39.374752   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:39.609111   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:39.820207   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:39.873989   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:40.107814   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:40.319942   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:40.373468   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:40.620227   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:40.819905   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:40.874850   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:41.110768   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:41.319429   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:41.373871   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:41.609730   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:41.820835   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:41.880857   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:42.109040   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:42.319592   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:42.375517   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:42.609605   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:42.819188   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:42.874197   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:43.111361   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:43.319058   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:43.376510   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:43.609183   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:43.820823   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:43.883253   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:44.108606   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:44.320270   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:44.382397   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:44.614692   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:44.820344   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:44.883416   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:45.108892   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:45.319041   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:45.375167   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:45.609745   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:45.820541   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:45.874576   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:46.109109   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:46.320787   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:46.374696   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:46.610511   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:46.822449   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:46.875865   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:47.108868   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:47.320464   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:47.374646   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:47.625984   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:47.820037   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:47.874355   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:48.108926   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:48.319709   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:48.374214   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:48.610743   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:48.820625   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:48.874899   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:49.527470   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:49.527528   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:49.529401   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:49.607985   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:49.819750   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:49.876537   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:50.116799   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:50.319118   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:50.373160   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:50.608237   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:50.820588   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:50.874827   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:51.112295   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:51.321371   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:51.377896   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:51.607997   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0429 18:42:51.819291   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:51.875630   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:52.109047   15893 kapi.go:107] duration metric: took 1m19.506656648s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0429 18:42:52.320150   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:52.376103   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:52.821968   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:52.875586   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:53.320455   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:53.374295   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:53.820083   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:53.876342   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:54.321485   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:54.376558   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:54.820417   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:54.875369   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:55.319743   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:55.373135   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:55.819911   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:55.873258   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:56.319772   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:56.373531   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:56.819910   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:56.874423   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:57.319059   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:57.373559   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:57.819961   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:57.874481   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:58.319912   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:58.374001   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:58.819175   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:58.875865   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:59.319206   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:59.374595   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:42:59.820104   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:42:59.874755   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:00.319877   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:00.373337   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:00.819582   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:00.875737   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:01.319802   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:01.373031   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:01.819995   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:01.874533   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:02.319260   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:02.373559   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:02.820593   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:02.873851   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:03.319967   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:03.372906   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:03.819082   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:03.878013   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:04.319785   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:04.372707   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:04.820457   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:04.875576   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:05.320043   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:05.373304   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:05.820389   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:05.874968   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:06.319544   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:06.378267   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:06.820628   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:06.873196   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:07.319880   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:07.373113   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:07.819329   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:07.875388   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:08.320573   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:08.373855   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:08.820239   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:08.876090   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:09.319297   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:09.373151   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:09.819637   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:09.874599   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:10.320795   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:10.373131   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:10.819797   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:10.874120   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:11.320656   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:11.374527   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:11.819552   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:11.873767   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:12.319673   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:12.374450   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:12.819904   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:12.873764   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:13.320406   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:13.373778   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:13.820170   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:13.876108   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:14.320520   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:14.374287   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:14.819579   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:14.873493   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:15.320041   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:15.373468   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:15.819939   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:15.876553   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:16.319766   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:16.373308   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:16.820281   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:16.873375   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:17.320850   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:17.373372   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:17.819659   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:17.875307   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:18.321259   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:18.373735   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:18.820405   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:18.874455   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:19.320394   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:19.374212   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:19.819691   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:19.874410   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:20.319316   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:20.374332   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:20.819649   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:20.875092   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:21.320257   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:21.373704   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:21.819803   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:21.875422   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:22.318737   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:22.373045   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:22.819990   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:22.873873   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:23.320145   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:23.373578   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:23.821355   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:23.875818   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:24.321083   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:24.374178   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:24.819215   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:24.875319   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:25.319686   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:25.376024   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:25.819477   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:25.873687   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:26.320543   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:26.374494   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:26.820507   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:26.873407   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:27.323027   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:27.373559   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:27.825718   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:27.874676   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:28.320483   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:28.373696   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:28.820231   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:28.873452   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:29.323207   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:29.373796   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:29.820809   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:29.872751   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:30.320418   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:30.374226   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:30.820193   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:30.874685   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:31.320601   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:31.373914   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:31.820225   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:31.876561   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:32.319638   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:32.375030   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:32.819689   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:32.874627   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:33.319722   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:33.373593   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:33.821274   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:33.873990   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:34.320049   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:34.373046   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:34.819007   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:34.883671   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:35.319871   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:35.373202   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:35.819642   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:35.883366   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:36.324986   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:36.373733   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:36.819605   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:36.873038   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:37.320989   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:37.373455   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:37.819659   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:37.874293   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:38.319416   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:38.373926   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:38.819249   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:38.875393   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:39.320403   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:39.373828   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:39.820073   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:39.873655   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:40.321369   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:40.373639   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:40.819869   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:40.874906   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:41.320417   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:41.374235   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:41.820466   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:41.873696   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:42.319974   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:42.373252   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:42.819928   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:42.873777   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:43.320389   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:43.374188   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:43.819476   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:43.874141   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:44.319462   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:44.374207   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:44.819251   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:44.878598   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:45.319713   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:45.374367   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:45.819515   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:45.876972   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:46.319711   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:46.373544   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:46.820227   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:46.875753   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:47.320069   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:47.373262   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:47.819415   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:47.873611   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:48.319899   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:48.373258   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:48.819697   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:48.875877   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:49.319715   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:49.373138   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:49.819754   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:49.872750   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:50.320174   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:50.375535   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:50.823862   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:50.878775   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:51.322016   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:51.374380   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:51.820918   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:51.876605   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:52.320850   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:52.376116   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:52.820457   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:52.874567   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:53.319610   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:53.376072   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:53.819362   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:53.873783   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:54.320847   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:54.787695   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:54.822470   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:54.875157   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:55.319503   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:55.373812   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:55.820655   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:55.878154   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:56.322038   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:56.372849   15893 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0429 18:43:56.818785   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:56.873434   15893 kapi.go:107] duration metric: took 2m25.008908375s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0429 18:43:57.320310   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:57.820784   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:58.319912   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:58.821453   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:59.321002   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:43:59.822238   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:44:00.321091   15893 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0429 18:44:00.819883   15893 kapi.go:107] duration metric: took 2m26.004241214s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0429 18:44:00.821713   15893 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-412183 cluster.
	I0429 18:44:00.823120   15893 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0429 18:44:00.824635   15893 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0429 18:44:00.826134   15893 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, helm-tiller, inspektor-gadget, yakd, metrics-server, storage-provisioner, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0429 18:44:00.827542   15893 addons.go:505] duration metric: took 2m39.105496165s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns helm-tiller inspektor-gadget yakd metrics-server storage-provisioner default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0429 18:44:00.827601   15893 start.go:245] waiting for cluster config update ...
	I0429 18:44:00.827623   15893 start.go:254] writing updated cluster config ...
	I0429 18:44:00.828011   15893 ssh_runner.go:195] Run: rm -f paused
	I0429 18:44:00.882205   15893 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 18:44:00.883921   15893 out.go:177] * Done! kubectl is now configured to use "addons-412183" cluster and "default" namespace by default
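	The kapi.go lines above show minikube polling the cluster roughly twice per second until pods matching each addon's label selector ("app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=gcp-auth") leave Pending, then recording how long the wait took. As a rough illustration of that wait pattern only (not minikube's actual implementation; the kubeconfig path, namespace, poll interval, and timeout below are assumptions), a minimal client-go sketch could look like:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // waitForPod polls until at least one pod matching selector is Running,
	    // logging the current state on every attempt, similar to the kapi.go output above.
	    func waitForPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
	            func(ctx context.Context) (bool, error) {
	                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	                if err != nil {
	                    return false, nil // treat API errors as transient and keep polling
	                }
	                for _, p := range pods.Items {
	                    if p.Status.Phase == corev1.PodRunning {
	                        return true, nil
	                    }
	                }
	                fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
	                return false, nil
	            })
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumes ~/.kube/config
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        start := time.Now()
	        if err := waitForPod(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 10*time.Minute); err != nil {
	            panic(err)
	        }
	        fmt.Printf("duration metric: took %s to wait for app.kubernetes.io/name=ingress-nginx\n", time.Since(start))
	    }

	The "duration metric: took ..." lines in the log correspond to how long such a loop runs before its readiness condition first succeeds.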
	
	
	==> CRI-O <==
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.187541012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46f4e0db-e14b-4604-8a4f-190c58c83c88 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.187595715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46f4e0db-e14b-4604-8a4f-190c58c83c88 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.187937995Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fc8fc0b63ef314b42836b03a887a711953f39d3b92053f68e6fc31c7a287c7b3,PodSandboxId:51d4271a95c5c93f043ffd53993f99a35cd05847e00cf95346b86eb88cd06cb1,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1714416404687965737,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-58mmg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28ba31f3-909c-45c3-ba1f-bb5679486b41,},Annotations:map[string]string{io.kubernetes.container.hash: edfe22b7,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2d302338a160a0e2c150527899fa208a7976b4eaa8335b15399f4e981686bb,PodSandboxId:139f0e46199808b9891e53c28a9bc5d0efd19b2f447ce7f0338145450a919bb3,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1714416310243355817,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-58zjw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 37a072c8-8aaf-4735-86a9-4bd44444005d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6782f344,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5148da366d607322c5acf6fedaa54eeec81d5901a47a2c19bf640ea2132d12d7,PodSandboxId:54919015a0b0c60cd9437e254530ddd076856ae9494816f022588350e1090b11,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499,State:CONTAINER_RUNNING,CreatedAt:1714416262309136542,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: bbcc8ec6-e9cc-473d-8d5e-e5fabf60cc5e,},Annotations:map[string]string{io.kubernetes.container.hash: b29b96b5,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8658b8decf43f7b00b5234119193d5379dafa508b2458ebc721dcbcdd268dc60,PodSandboxId:42228df064aa40b4efadd0b3002091eae5a8f4ad80fa222c4243bf00a3935213,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1714416240403138805,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-g9vlr,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c2859801-1eca-4ac0-9612-7f83c77ac4d6,},Annotations:map[string]string{io.kubernetes.container.hash: 8ebd5ced,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29723a57198ecb6736aa46b90064c6d235583f82f8f570b523eb08d0fc9c53e7,PodSandboxId:2adc934dc9526163c736dae324927ab16d96df4ee16cbeb32cdea6b93a9c0ad8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17144
16151898083483,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-5b87k,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 695334d7-ed81-4e1f-8805-0b308e61e51f,},Annotations:map[string]string{io.kubernetes.container.hash: a9e22ea9,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c8d385880f89883295a4a6bd71b431cb52dfdbab3f2fc249602b54b0b18a4d9,PodSandboxId:8ccb7691db8a7c30d09ea0aad607c7aa095c9547643edb35abd769fefe06e70b,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1714416145971512333,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-7cpwq,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 734d3fd3-045b-43dd-924f-cd2d77eadbcc,},Annotations:map[string]string{io.kubernetes.container.hash: 20f9d009,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40c213a21bee0d4a0530b8d7edb51ab11bf02b947a1dc38debbe72ba2c3eea16,PodSandboxId:98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e41
2e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1714416132204054372,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-xbdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d97597b-550d-4b86-850f-8b839281a545,},Annotations:map[string]string{io.kubernetes.container.hash: 8c871209,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6819fcea7b4fad8d8d7adc770f2b04a66dfcf100f35d5fb0f6b52e3f25813d9,PodSandboxId:447c6a6c57fc59a9672fe90628912c8fb80bee863c9ac746dbb2b04dab7add28,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714416089733328568,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4e8e367-62f5-4063-8cd9-523506a10609,},Annotations:map[string]string{io.kubernetes.container.hash: a6977079,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0127dd97a03df877cc50b862b3f419eeb59f37a3f2b4bbdf4546bdee290cf25e,PodSandboxId:f21508223bf35ac04d378b218daf78ad13d443b1015b39afa352254b001e007f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Ima
ge:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714416086952837807,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2xt85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff070716-6e1d-4ac4-96c7-fa6eb4105594,},Annotations:map[string]string{io.kubernetes.container.hash: 7ac317e9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4a2
3aee1a21bdea7a870774c664b1a6554a1007827af182017169b776d8cf3c,PodSandboxId:d310d206473956f54252da5c679be0f0455eec0cd467eac8a99d9c56bf39d7db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714416084705178195,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsvwz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c22033d6-3278-412b-8d58-ae73835285fd,},Annotations:map[string]string{io.kubernetes.container.hash: 44a5643,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8edae0c7e7e7b7865168e4f5d3654e0ac9e8c6
27d1323178a1618794e43e7b44,PodSandboxId:cb6eff154dc00721778dfa345091adf361662564d0de27891c624788828ec11c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714416062319612387,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1f0134674e28304dc7ff0337d3566c1,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ddb04a35645e46136c0d21b3330
787d487d92ccfbc96de7a34f04aee8385685,PodSandboxId:2b92bcdf43a456f3a5d7d8a9384336aa649a20849ba17b0d1cee589273de4a91,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714416062272510162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7f2d54e973228a7084cd2d7f18eb35,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2791682e5b0aa0ce3e2020d5d6d2965aef373a33d2fa
b67a9a1c11ef1f17085,PodSandboxId:bb671963d098e05dbbc03ab8a2039ddfb0fd806f87fc325539c25e3bd2ddcca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714416062286547518,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f06ca88a53653b148fbde08ae3cd69e,},Annotations:map[string]string{io.kubernetes.container.hash: 6f22ab8d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a28762184ca2929c27f2b4bee83875934d812823e05b56c5aab7c46ae6b05b
2e,PodSandboxId:643dac2625f91a2b78443fb3f732db640f96bfa8bd66b7fa05e3fc8bc4371606,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714416062189058235,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-412183,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb0226a80bbce0f771b472c76b0984d9,},Annotations:map[string]string{io.kubernetes.container.hash: bf541bbc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46f4e0db-e14b-4604-8a4f-190c58c83c88 name=/runtime.v1.RuntimeService/ListC
ontainers
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.204454329Z" level=debug msg="Unmounted container 98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2" file="storage/runtime.go:495" id=aae1c710-ad36-405c-98bb-3e680541f5a0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.211358636Z" level=debug msg="Event: WRITE         \"/var/run/crio/exits/98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2.CYTVM2\"" file="server/server.go:805"
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.211435855Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2.CYTVM2\"" file="server/server.go:805"
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.211463668Z" level=debug msg="Container or sandbox exited: 98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2.CYTVM2" file="server/server.go:810"
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.211498560Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2\"" file="server/server.go:805"
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.211516737Z" level=debug msg="Container or sandbox exited: 98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2" file="server/server.go:810"
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.211538361Z" level=debug msg="sandbox infra exited and found: 98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2" file="server/server.go:825"
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.211566744Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2.CYTVM2\"" file="server/server.go:805"
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.227309232Z" level=debug msg="Found exit code for 98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2: 0" file="oci/runtime_oci.go:1022" id=aae1c710-ad36-405c-98bb-3e680541f5a0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.227478257Z" level=debug msg="Skipping status update for: &{State:{Version:1.0.2-dev ID:98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2 Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.name:POD io.kubernetes.cri-o.Annotations:{\"kubernetes.io/config.seen\":\"2024-04-29T18:41:27.287959659Z\",\"kubernetes.io/config.source\":\"api\"} io.kubernetes.cri-o.CNIResult:{\"cniVersion\":\"1.0.0\",\"interfaces\":[{\"name\":\"bridge\",\"mac\":\"2e:a3:be:9a:4b:35\"},{\"name\":\"veth04668a39\",\"mac\":\"be:85:50:2c:75:db\"},{\"name\":\"eth0\",\"mac\":\"1e:ad:c9:cf:48:b2\",\"sandbox\":\"/var/run/netns/aa5600cf-c6a7-40ca-b155-2a2b3f3f464f\"}],\"ips\":[{\"interface\":2,\"address\":\"10.244.0.8/16\",\"gateway\":\"10.244.0.1\"}],\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.244
.0.1\"}],\"dns\":{}} io.kubernetes.cri-o.CgroupParent:/kubepods/burstable/pod0d97597b-550d-4b86-850f-8b839281a545 io.kubernetes.cri-o.ContainerID:98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2 io.kubernetes.cri-o.ContainerName:k8s_POD_metrics-server-c59844bb4-xbdnx_kube-system_0d97597b-550d-4b86-850f-8b839281a545_0 io.kubernetes.cri-o.ContainerType:sandbox io.kubernetes.cri-o.Created:2024-04-29T18:41:27.954621755Z io.kubernetes.cri-o.HostName:metrics-server-c59844bb4-xbdnx io.kubernetes.cri-o.HostNetwork:false io.kubernetes.cri-o.HostnamePath:/var/run/containers/storage/overlay-containers/98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2/userdata/hostname io.kubernetes.cri-o.Image:registry.k8s.io/pause:3.9 io.kubernetes.cri-o.ImageName:registry.k8s.io/pause:3.9 io.kubernetes.cri-o.KubeName:metrics-server-c59844bb4-xbdnx io.kubernetes.cri-o.Labels:{\"pod-template-hash\":\"c59844bb4\",\"k8s-app\":\"metrics-server\",\"io.kubernetes.pod.uid\":\"0d97597b-550d-4b86-850f-8b839281a545
\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"metrics-server-c59844bb4-xbdnx\",\"io.kubernetes.container.name\":\"POD\"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-xbdnx_0d97597b-550d-4b86-850f-8b839281a545/98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2.log io.kubernetes.cri-o.Metadata:{\"name\":\"metrics-server-c59844bb4-xbdnx\",\"uid\":\"0d97597b-550d-4b86-850f-8b839281a545\",\"namespace\":\"kube-system\"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/4de54f125d5ab6ad7666b2b188859380f110b663648464d5d85d7a2914721fd5/merged io.kubernetes.cri-o.Name:k8s_metrics-server-c59844bb4-xbdnx_kube-system_0d97597b-550d-4b86-850f-8b839281a545_0 io.kubernetes.cri-o.Namespace:kube-system io.kubernetes.cri-o.NamespaceOptions:{\"pid\":1} io.kubernetes.cri-o.PodLinuxOverhead:{} io.kubernetes.cri-o.PodLinuxResources:{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}} io.kubernetes.cri-o.P
ortMappings:[] io.kubernetes.cri-o.PrivilegedRuntime:false io.kubernetes.cri-o.ResolvPath:/var/run/containers/storage/overlay-containers/98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2/userdata/resolv.conf io.kubernetes.cri-o.RuntimeHandler: io.kubernetes.cri-o.SandboxID:98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2 io.kubernetes.cri-o.SandboxName:k8s_metrics-server-c59844bb4-xbdnx_kube-system_0d97597b-550d-4b86-850f-8b839281a545_0 io.kubernetes.cri-o.SeccompProfilePath:RuntimeDefault io.kubernetes.cri-o.ShmPath:/var/run/containers/storage/overlay-containers/98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2/userdata/shm io.kubernetes.pod.name:metrics-server-c59844bb4-xbdnx io.kubernetes.pod.namespace:kube-system io.kubernetes.pod.uid:0d97597b-550d-4b86-850f-8b839281a545 k8s-app:metrics-server kubernetes.io/config.seen:2024-04-29T18:41:27.287959659Z kubernetes.io/config.source:api pod-template-hash:c59844bb4]} Created:2024-04-29 18:41:30.379359201 +0000 UTC St
arted:2024-04-29 18:41:30.546502065 +0000 UTC m=+39.640230979 Finished:2024-04-29 18:49:37.210751762 +0000 UTC ExitCode:0xc000fadcd0 OOMKilled:false SeccompKilled:false Error: InitPid:2858 InitStartTime:6379 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}" file="oci/runtime_oci.go:946"
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.231363225Z" level=debug msg="Event: REMOVE        \"/var/run/crio/exits/98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2\"" file="server/server.go:805"
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.232224987Z" level=info msg="Stopped pod sandbox: 98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2" file="server/sandbox_stop_linux.go:91" id=aae1c710-ad36-405c-98bb-3e680541f5a0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.249183839Z" level=debug msg="Response: &StopPodSandboxResponse{}" file="otel-collector/interceptors.go:74" id=aae1c710-ad36-405c-98bb-3e680541f5a0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.253285628Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 0d97597b-550d-4b86-850f-8b839281a545,},},}" file="otel-collector/interceptors.go:62" id=e34b968d-4b6e-456c-b173-be7c93987ac2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.253470806Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-xbdnx,Uid:0d97597b-550d-4b86-850f-8b839281a545,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714416087954621755,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-xbdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d97597b-550d-4b86-850f-8b839281a545,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T18:41:27.287959659Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e34b968d-4b6e-456c-b173-be7c93987ac2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.254114448Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8b013cd2-0a6c-4637-9072-ab5bd982b7f2 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.254269932Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-xbdnx,Uid:0d97597b-550d-4b86-850f-8b839281a545,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714416087954621755,Network:&PodSandboxNetworkStatus{Ip:10.244.0.8,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-c59844bb4-xbdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d97597b-550d-4b86-850f-8b839281a545,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T18:41:27.287959659Z,kubernetes.io/config.
source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=8b013cd2-0a6c-4637-9072-ab5bd982b7f2 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.254730985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 0d97597b-550d-4b86-850f-8b839281a545,},},}" file="otel-collector/interceptors.go:62" id=27286676-d97e-4c9a-81f8-c70601d33159 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.255205622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27286676-d97e-4c9a-81f8-c70601d33159 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.255313798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:40c213a21bee0d4a0530b8d7edb51ab11bf02b947a1dc38debbe72ba2c3eea16,PodSandboxId:98b14ddadef48a063ea40fde397b7ea6b2d50813701e55cc4c614225ad1cacc2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_EXITED,CreatedAt:1714416132204054372,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-xbdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d97597b-550d-4b86-850f-8b839281a545,},Annotations:map[string]string{io.kubernetes.container.hash: 8c871209,io.kubern
etes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27286676-d97e-4c9a-81f8-c70601d33159 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.256212304Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:40c213a21bee0d4a0530b8d7edb51ab11bf02b947a1dc38debbe72ba2c3eea16,Verbose:false,}" file="otel-collector/interceptors.go:62" id=3457389b-37d5-4194-8dcf-da56133f8a67 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 29 18:49:37 addons-412183 crio[688]: time="2024-04-29 18:49:37.256345693Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:40c213a21bee0d4a0530b8d7edb51ab11bf02b947a1dc38debbe72ba2c3eea16,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1714416132264462012,StartedAt:1714416132295895142,FinishedAt:1714416577025739527,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,Reason:Completed,Message:,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-xbdnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d97597b-550d-4b86-850f-8b839281a545,},Annotations:map[string]string{io.kubernetes.container.hash: 8c871209
,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/tmp,HostPath:/var/lib/kubelet/pods/0d97597b-550d-4b86-850f-8b839281a545/volumes/kubernetes.io~empty-dir/tmp-dir,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0d97597b-550d-4b86-850f-8b839281a545/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0d97597b-550d-4b86-850f-8b839281a545/containers/metrics-server/8139644c,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_P
RIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/0d97597b-550d-4b86-850f-8b839281a545/volumes/kubernetes.io~projected/kube-api-access-tcv59,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-xbdnx_0d97597b-550d-4b86-850f-8b839281a545/metrics-server/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:948,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=3457389b-37d5-4194-8dcf-da56133f8a67 name=/runtime.v1.RuntimeService/ContainerStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fc8fc0b63ef31       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 2 minutes ago       Running             hello-world-app           0                   51d4271a95c5c       hello-world-app-86c47465fc-58mmg
	1c2d302338a16       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                   4 minutes ago       Running             headlamp                  0                   139f0e4619980       headlamp-7559bf459f-58zjw
	5148da366d607       docker.io/library/nginx@sha256:dd524baac105f5353429a7022c26a02c8c80d95a50cb4d34b6e19a3a4289ff88                         5 minutes ago       Running             nginx                     0                   54919015a0b0c       nginx
	8658b8decf43f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            5 minutes ago       Running             gcp-auth                  0                   42228df064aa4       gcp-auth-5db96cd9b4-g9vlr
	29723a57198ec       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         7 minutes ago       Running             yakd                      0                   2adc934dc9526       yakd-dashboard-5ddbf7d777-5b87k
	8c8d385880f89       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   8ccb7691db8a7       local-path-provisioner-8d985888d-7cpwq
	40c213a21bee0       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Exited              metrics-server            0                   98b14ddadef48       metrics-server-c59844bb4-xbdnx
	d6819fcea7b4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   447c6a6c57fc5       storage-provisioner
	0127dd97a03df       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   f21508223bf35       coredns-7db6d8ff4d-2xt85
	c4a23aee1a21b       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                        8 minutes ago       Running             kube-proxy                0                   d310d20647395       kube-proxy-xsvwz
	8edae0c7e7e7b       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                        8 minutes ago       Running             kube-controller-manager   0                   cb6eff154dc00       kube-controller-manager-addons-412183
	a2791682e5b0a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                        8 minutes ago       Running             kube-apiserver            0                   bb671963d098e       kube-apiserver-addons-412183
	7ddb04a35645e       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                        8 minutes ago       Running             kube-scheduler            0                   2b92bcdf43a45       kube-scheduler-addons-412183
	a28762184ca29       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   643dac2625f91       etcd-addons-412183
	
	
	==> coredns [0127dd97a03df877cc50b862b3f419eeb59f37a3f2b4bbdf4546bdee290cf25e] <==
	[INFO] 10.244.0.7:40337 - 2282 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000216737s
	[INFO] 10.244.0.7:39363 - 19186 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079375s
	[INFO] 10.244.0.7:39363 - 44788 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128334s
	[INFO] 10.244.0.7:50589 - 3994 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070298s
	[INFO] 10.244.0.7:50589 - 56728 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000168587s
	[INFO] 10.244.0.7:45661 - 52302 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096663s
	[INFO] 10.244.0.7:45661 - 55628 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000171665s
	[INFO] 10.244.0.7:34103 - 56710 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000058336s
	[INFO] 10.244.0.7:34103 - 44165 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00022743s
	[INFO] 10.244.0.7:42542 - 20273 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000043826s
	[INFO] 10.244.0.7:42542 - 45630 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090902s
	[INFO] 10.244.0.7:53550 - 42476 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043078s
	[INFO] 10.244.0.7:53550 - 52946 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000159656s
	[INFO] 10.244.0.7:42370 - 63616 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000043076s
	[INFO] 10.244.0.7:42370 - 38786 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000143879s
	[INFO] 10.244.0.22:46040 - 4687 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000646984s
	[INFO] 10.244.0.22:60142 - 56142 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000902713s
	[INFO] 10.244.0.22:53676 - 17323 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107731s
	[INFO] 10.244.0.22:57045 - 18119 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161242s
	[INFO] 10.244.0.22:38046 - 50695 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120569s
	[INFO] 10.244.0.22:45952 - 10907 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123567s
	[INFO] 10.244.0.22:50073 - 54763 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00379794s
	[INFO] 10.244.0.22:36038 - 33275 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.004189416s
	[INFO] 10.244.0.24:33327 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000462441s
	[INFO] 10.244.0.24:41500 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00019273s
	
	
	==> describe nodes <==
	Name:               addons-412183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-412183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=addons-412183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T18_41_08_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-412183
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 18:41:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-412183
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 18:49:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 18:47:16 +0000   Mon, 29 Apr 2024 18:41:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 18:47:16 +0000   Mon, 29 Apr 2024 18:41:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 18:47:16 +0000   Mon, 29 Apr 2024 18:41:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 18:47:16 +0000   Mon, 29 Apr 2024 18:41:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    addons-412183
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 07b4da41a2a64d4fb0e81387a882105f
	  System UUID:                07b4da41-a2a6-4d4f-b0e8-1387a882105f
	  Boot ID:                    bb7d8d4f-bdf5-45de-b22b-79a308c9af93
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-58mmg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  gcp-auth                    gcp-auth-5db96cd9b4-g9vlr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  headlamp                    headlamp-7559bf459f-58zjw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 coredns-7db6d8ff4d-2xt85                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m16s
	  kube-system                 etcd-addons-412183                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m30s
	  kube-system                 kube-apiserver-addons-412183              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-controller-manager-addons-412183     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-proxy-xsvwz                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-scheduler-addons-412183              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  local-path-storage          local-path-provisioner-8d985888d-7cpwq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-5b87k           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m8s   kube-proxy       
	  Normal  Starting                 8m30s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m30s  kubelet          Node addons-412183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s  kubelet          Node addons-412183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s  kubelet          Node addons-412183 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m29s  kubelet          Node addons-412183 status is now: NodeReady
	  Normal  RegisteredNode           8m17s  node-controller  Node addons-412183 event: Registered Node addons-412183 in Controller
	
	
	==> dmesg <==
	[  +0.160730] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.025683] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.260760] kauditd_printk_skb: 131 callbacks suppressed
	[  +5.960768] kauditd_printk_skb: 106 callbacks suppressed
	[ +13.789273] kauditd_printk_skb: 5 callbacks suppressed
	[Apr29 18:42] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.966614] kauditd_printk_skb: 4 callbacks suppressed
	[ +22.429210] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.033357] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.853048] kauditd_printk_skb: 58 callbacks suppressed
	[Apr29 18:43] kauditd_printk_skb: 2 callbacks suppressed
	[ +14.654654] kauditd_printk_skb: 24 callbacks suppressed
	[ +30.757856] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.624438] kauditd_printk_skb: 15 callbacks suppressed
	[Apr29 18:44] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.343314] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.779507] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.580158] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.793348] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.496820] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.910672] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.038568] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.011140] kauditd_printk_skb: 11 callbacks suppressed
	[Apr29 18:45] kauditd_printk_skb: 26 callbacks suppressed
	[Apr29 18:46] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [a28762184ca2929c27f2b4bee83875934d812823e05b56c5aab7c46ae6b05b2e] <==
	{"level":"warn","ts":"2024-04-29T18:42:49.515583Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:42:49.09942Z","time spent":"416.159281ms","remote":"127.0.0.1:51948","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85576,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-04-29T18:42:49.515734Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.37462ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-04-29T18:42:49.51587Z","caller":"traceutil/trace.go:171","msg":"trace[285637063] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1119; }","duration":"149.581537ms","start":"2024-04-29T18:42:49.366278Z","end":"2024-04-29T18:42:49.51586Z","steps":["trace[285637063] 'agreement among raft nodes before linearized reading'  (duration: 149.407347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:42:49.516018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.03821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-04-29T18:42:49.516071Z","caller":"traceutil/trace.go:171","msg":"trace[642312019] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1119; }","duration":"203.115082ms","start":"2024-04-29T18:42:49.312948Z","end":"2024-04-29T18:42:49.516063Z","steps":["trace[642312019] 'agreement among raft nodes before linearized reading'  (duration: 203.01565ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T18:43:54.772367Z","caller":"traceutil/trace.go:171","msg":"trace[152321535] transaction","detail":"{read_only:false; response_revision:1252; number_of_response:1; }","duration":"455.384004ms","start":"2024-04-29T18:43:54.316956Z","end":"2024-04-29T18:43:54.77234Z","steps":["trace[152321535] 'process raft request'  (duration: 455.026582ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:43:54.772573Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:43:54.316943Z","time spent":"455.561115ms","remote":"127.0.0.1:51926","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1250 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-29T18:43:54.773109Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"410.838063ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-04-29T18:43:54.773176Z","caller":"traceutil/trace.go:171","msg":"trace[884231050] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1252; }","duration":"410.930409ms","start":"2024-04-29T18:43:54.362239Z","end":"2024-04-29T18:43:54.773169Z","steps":["trace[884231050] 'agreement among raft nodes before linearized reading'  (duration: 410.781646ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:43:54.773225Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:43:54.362226Z","time spent":"410.992978ms","remote":"127.0.0.1:51948","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14386,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-04-29T18:43:54.772955Z","caller":"traceutil/trace.go:171","msg":"trace[244138109] linearizableReadLoop","detail":"{readStateIndex:1304; appliedIndex:1303; }","duration":"409.842008ms","start":"2024-04-29T18:43:54.362263Z","end":"2024-04-29T18:43:54.772105Z","steps":["trace[244138109] 'read index received'  (duration: 409.665865ms)","trace[244138109] 'applied index is now lower than readState.Index'  (duration: 175.629µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T18:43:54.773647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.586725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-04-29T18:43:54.773699Z","caller":"traceutil/trace.go:171","msg":"trace[964049683] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1252; }","duration":"104.670886ms","start":"2024-04-29T18:43:54.66902Z","end":"2024-04-29T18:43:54.773691Z","steps":["trace[964049683] 'agreement among raft nodes before linearized reading'  (duration: 104.552099ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T18:44:00.148956Z","caller":"traceutil/trace.go:171","msg":"trace[1072383503] linearizableReadLoop","detail":"{readStateIndex:1327; appliedIndex:1326; }","duration":"227.5973ms","start":"2024-04-29T18:43:59.921346Z","end":"2024-04-29T18:44:00.148943Z","steps":["trace[1072383503] 'read index received'  (duration: 227.376784ms)","trace[1072383503] 'applied index is now lower than readState.Index'  (duration: 220.115µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T18:44:00.149258Z","caller":"traceutil/trace.go:171","msg":"trace[790922415] transaction","detail":"{read_only:false; response_revision:1274; number_of_response:1; }","duration":"337.397027ms","start":"2024-04-29T18:43:59.81185Z","end":"2024-04-29T18:44:00.149247Z","steps":["trace[790922415] 'process raft request'  (duration: 336.993803ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:44:00.149424Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:43:59.811748Z","time spent":"337.61764ms","remote":"127.0.0.1:52042","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1254 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-04-29T18:44:00.149215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.87654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T18:44:00.149611Z","caller":"traceutil/trace.go:171","msg":"trace[1228796822] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1274; }","duration":"228.282757ms","start":"2024-04-29T18:43:59.921317Z","end":"2024-04-29T18:44:00.1496Z","steps":["trace[1228796822] 'agreement among raft nodes before linearized reading'  (duration: 227.844234ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T18:44:12.898724Z","caller":"traceutil/trace.go:171","msg":"trace[160721282] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1351; }","duration":"344.314789ms","start":"2024-04-29T18:44:12.554323Z","end":"2024-04-29T18:44:12.898638Z","steps":["trace[160721282] 'process raft request'  (duration: 344.100167ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T18:44:12.89913Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T18:44:12.554301Z","time spent":"344.585178ms","remote":"127.0.0.1:51840","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":57,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/events/gadget/gadget-4fdlj.17cad4596fac2a19\" mod_revision:609 > success:<request_delete_range:<key:\"/registry/events/gadget/gadget-4fdlj.17cad4596fac2a19\" > > failure:<request_range:<key:\"/registry/events/gadget/gadget-4fdlj.17cad4596fac2a19\" > >"}
	{"level":"warn","ts":"2024-04-29T18:44:22.154995Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.530619ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T18:44:22.155146Z","caller":"traceutil/trace.go:171","msg":"trace[481488012] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1442; }","duration":"287.749967ms","start":"2024-04-29T18:44:21.867377Z","end":"2024-04-29T18:44:22.155127Z","steps":["trace[481488012] 'agreement among raft nodes before linearized reading'  (duration: 287.514228ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T18:44:22.155303Z","caller":"traceutil/trace.go:171","msg":"trace[31018762] linearizableReadLoop","detail":"{readStateIndex:1501; appliedIndex:1500; }","duration":"287.258903ms","start":"2024-04-29T18:44:21.867402Z","end":"2024-04-29T18:44:22.154661Z","steps":["trace[31018762] 'read index received'  (duration: 282.154297ms)","trace[31018762] 'applied index is now lower than readState.Index'  (duration: 5.103555ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T18:44:22.156952Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.88133ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:16 size:78629"}
	{"level":"info","ts":"2024-04-29T18:44:22.157016Z","caller":"traceutil/trace.go:171","msg":"trace[637640349] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:16; response_revision:1442; }","duration":"249.980875ms","start":"2024-04-29T18:44:21.907027Z","end":"2024-04-29T18:44:22.157007Z","steps":["trace[637640349] 'agreement among raft nodes before linearized reading'  (duration: 249.590534ms)"],"step_count":1}
	
	
	==> gcp-auth [8658b8decf43f7b00b5234119193d5379dafa508b2458ebc721dcbcdd268dc60] <==
	2024/04/29 18:44:06 Ready to write response ...
	2024/04/29 18:44:11 Ready to marshal response ...
	2024/04/29 18:44:11 Ready to write response ...
	2024/04/29 18:44:14 Ready to marshal response ...
	2024/04/29 18:44:14 Ready to write response ...
	2024/04/29 18:44:25 Ready to marshal response ...
	2024/04/29 18:44:25 Ready to write response ...
	2024/04/29 18:44:32 Ready to marshal response ...
	2024/04/29 18:44:32 Ready to write response ...
	2024/04/29 18:44:36 Ready to marshal response ...
	2024/04/29 18:44:36 Ready to write response ...
	2024/04/29 18:44:36 Ready to marshal response ...
	2024/04/29 18:44:36 Ready to write response ...
	2024/04/29 18:44:39 Ready to marshal response ...
	2024/04/29 18:44:39 Ready to write response ...
	2024/04/29 18:44:49 Ready to marshal response ...
	2024/04/29 18:44:49 Ready to write response ...
	2024/04/29 18:45:04 Ready to marshal response ...
	2024/04/29 18:45:04 Ready to write response ...
	2024/04/29 18:45:04 Ready to marshal response ...
	2024/04/29 18:45:04 Ready to write response ...
	2024/04/29 18:45:04 Ready to marshal response ...
	2024/04/29 18:45:04 Ready to write response ...
	2024/04/29 18:46:40 Ready to marshal response ...
	2024/04/29 18:46:40 Ready to write response ...
	
	
	==> kernel <==
	 18:49:37 up 9 min,  0 users,  load average: 0.67, 0.78, 0.52
	Linux addons-412183 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a2791682e5b0aa0ce3e2020d5d6d2965aef373a33d2fab67a9a1c11ef1f17085] <==
	I0429 18:44:07.493367       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0429 18:44:08.523930       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0429 18:44:13.036454       1 trace.go:236] Trace[221412242]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:3f2ed355-877d-4537-9a65-eb696c18492b,client:192.168.39.105,api-group:,api-version:v1,name:,subresource:,namespace:gadget,protocol:HTTP/2.0,resource:events,scope:namespace,url:/api/v1/namespaces/gadget/events,user-agent:kube-controller-manager/v1.30.0 (linux/amd64) kubernetes/7c48c2b/system:serviceaccount:kube-system:namespace-controller,verb:DELETE (29-Apr-2024 18:44:12.536) (total time: 500ms):
	Trace[221412242]: ---"About to write a response" 494ms (18:44:13.036)
	Trace[221412242]: [500.010231ms] [500.010231ms] END
	I0429 18:44:14.003849       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0429 18:44:14.211078       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.151.215"}
	I0429 18:44:20.326080       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0429 18:44:30.113279       1 conn.go:339] Error on socket receive: read tcp 192.168.39.105:8443->192.168.39.1:37726: use of closed network connection
	E0429 18:44:30.257674       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.105:8443->10.244.0.26:46248: read: connection reset by peer
	I0429 18:44:56.182086       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 18:44:56.182162       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 18:44:56.202251       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 18:44:56.202371       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 18:44:56.222450       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 18:44:56.222549       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 18:44:56.230461       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 18:44:56.231024       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0429 18:44:56.256500       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0429 18:44:56.256693       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0429 18:44:57.222643       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0429 18:44:57.257300       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0429 18:44:57.274622       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0429 18:45:04.166929       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.83.62"}
	I0429 18:46:40.723494       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.150.197"}
	
	
	==> kube-controller-manager [8edae0c7e7e7b7865168e4f5d3654e0ac9e8c627d1323178a1618794e43e7b44] <==
	W0429 18:47:36.526037       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:47:36.526140       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:47:40.348654       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:47:40.348827       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:47:41.445641       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:47:41.445888       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:48:08.635687       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:48:08.636109       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:48:24.036077       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:48:24.036347       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:48:24.819869       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:48:24.820126       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:48:40.325519       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:48:40.325575       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:48:55.189556       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:48:55.189626       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:49:04.744642       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:49:04.744750       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:49:13.280216       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:49:13.280273       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:49:30.910998       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:49:30.911209       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0429 18:49:34.781871       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0429 18:49:34.781984       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0429 18:49:35.906490       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="9.554µs"
	
	
	==> kube-proxy [c4a23aee1a21bdea7a870774c664b1a6554a1007827af182017169b776d8cf3c] <==
	I0429 18:41:27.500198       1 server_linux.go:69] "Using iptables proxy"
	I0429 18:41:27.636947       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.105"]
	I0429 18:41:28.897531       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 18:41:28.898101       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 18:41:28.898148       1 server_linux.go:165] "Using iptables Proxier"
	I0429 18:41:28.971476       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 18:41:28.971652       1 server.go:872] "Version info" version="v1.30.0"
	I0429 18:41:28.971665       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 18:41:28.981078       1 config.go:192] "Starting service config controller"
	I0429 18:41:28.981092       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 18:41:28.981119       1 config.go:101] "Starting endpoint slice config controller"
	I0429 18:41:28.981123       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 18:41:28.981580       1 config.go:319] "Starting node config controller"
	I0429 18:41:28.981587       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 18:41:29.081941       1 shared_informer.go:320] Caches are synced for node config
	I0429 18:41:29.081966       1 shared_informer.go:320] Caches are synced for service config
	I0429 18:41:29.081993       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7ddb04a35645e46136c0d21b3330787d487d92ccfbc96de7a34f04aee8385685] <==
	W0429 18:41:05.951977       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 18:41:05.952089       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 18:41:06.031233       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 18:41:06.031734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 18:41:06.160065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 18:41:06.160204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 18:41:06.188159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 18:41:06.188246       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 18:41:06.189240       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 18:41:06.189349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 18:41:06.221036       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 18:41:06.221172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 18:41:06.221341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 18:41:06.221410       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 18:41:06.232294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 18:41:06.232484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 18:41:06.245830       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 18:41:06.245986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 18:41:06.382504       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 18:41:06.382820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 18:41:06.418162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 18:41:06.418957       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 18:41:06.592577       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 18:41:06.592632       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0429 18:41:09.616406       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 18:46:47 addons-412183 kubelet[1289]: I0429 18:46:47.677222    1289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7" path="/var/lib/kubelet/pods/16f0f69f-6e28-45a8-86f7-eb79bdf8ddf7/volumes"
	Apr 29 18:47:07 addons-412183 kubelet[1289]: E0429 18:47:07.698033    1289 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 18:47:07 addons-412183 kubelet[1289]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 18:47:07 addons-412183 kubelet[1289]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 18:47:07 addons-412183 kubelet[1289]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 18:47:07 addons-412183 kubelet[1289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 18:47:10 addons-412183 kubelet[1289]: I0429 18:47:10.470442    1289 scope.go:117] "RemoveContainer" containerID="f130515fc5d16af6b8751e730082a6f6943b5e91191980ebf594ba9df03676af"
	Apr 29 18:47:10 addons-412183 kubelet[1289]: I0429 18:47:10.493496    1289 scope.go:117] "RemoveContainer" containerID="3b907cfa2f261fccbbdee1832fe816e0252f80e1f6711ea66f7012b9c68c7c05"
	Apr 29 18:48:07 addons-412183 kubelet[1289]: E0429 18:48:07.698255    1289 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 18:48:07 addons-412183 kubelet[1289]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 18:48:07 addons-412183 kubelet[1289]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 18:48:07 addons-412183 kubelet[1289]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 18:48:07 addons-412183 kubelet[1289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 18:49:07 addons-412183 kubelet[1289]: E0429 18:49:07.699141    1289 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 18:49:07 addons-412183 kubelet[1289]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 18:49:07 addons-412183 kubelet[1289]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 18:49:07 addons-412183 kubelet[1289]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 18:49:07 addons-412183 kubelet[1289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 18:49:35 addons-412183 kubelet[1289]: I0429 18:49:35.925876    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-58mmg" podStartSLOduration=172.462512378 podStartE2EDuration="2m55.92584215s" podCreationTimestamp="2024-04-29 18:46:40 +0000 UTC" firstStartedPulling="2024-04-29 18:46:41.2055936 +0000 UTC m=+333.716543839" lastFinishedPulling="2024-04-29 18:46:44.668923373 +0000 UTC m=+337.179873611" observedRunningTime="2024-04-29 18:46:45.351665566 +0000 UTC m=+337.862615823" watchObservedRunningTime="2024-04-29 18:49:35.92584215 +0000 UTC m=+508.436792449"
	Apr 29 18:49:37 addons-412183 kubelet[1289]: I0429 18:49:37.274643    1289 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0d97597b-550d-4b86-850f-8b839281a545-tmp-dir\") pod \"0d97597b-550d-4b86-850f-8b839281a545\" (UID: \"0d97597b-550d-4b86-850f-8b839281a545\") "
	Apr 29 18:49:37 addons-412183 kubelet[1289]: I0429 18:49:37.274681    1289 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcv59\" (UniqueName: \"kubernetes.io/projected/0d97597b-550d-4b86-850f-8b839281a545-kube-api-access-tcv59\") pod \"0d97597b-550d-4b86-850f-8b839281a545\" (UID: \"0d97597b-550d-4b86-850f-8b839281a545\") "
	Apr 29 18:49:37 addons-412183 kubelet[1289]: I0429 18:49:37.275073    1289 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d97597b-550d-4b86-850f-8b839281a545-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "0d97597b-550d-4b86-850f-8b839281a545" (UID: "0d97597b-550d-4b86-850f-8b839281a545"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Apr 29 18:49:37 addons-412183 kubelet[1289]: I0429 18:49:37.290340    1289 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d97597b-550d-4b86-850f-8b839281a545-kube-api-access-tcv59" (OuterVolumeSpecName: "kube-api-access-tcv59") pod "0d97597b-550d-4b86-850f-8b839281a545" (UID: "0d97597b-550d-4b86-850f-8b839281a545"). InnerVolumeSpecName "kube-api-access-tcv59". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 29 18:49:37 addons-412183 kubelet[1289]: I0429 18:49:37.375668    1289 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0d97597b-550d-4b86-850f-8b839281a545-tmp-dir\") on node \"addons-412183\" DevicePath \"\""
	Apr 29 18:49:37 addons-412183 kubelet[1289]: I0429 18:49:37.375736    1289 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tcv59\" (UniqueName: \"kubernetes.io/projected/0d97597b-550d-4b86-850f-8b839281a545-kube-api-access-tcv59\") on node \"addons-412183\" DevicePath \"\""
	
	
	==> storage-provisioner [d6819fcea7b4fad8d8d7adc770f2b04a66dfcf100f35d5fb0f6b52e3f25813d9] <==
	I0429 18:41:31.610359       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 18:41:31.653680       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 18:41:31.653746       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 18:41:31.719068       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 18:41:31.719257       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-412183_00060d43-6ab8-4be3-a0c0-3eff8a05ce05!
	I0429 18:41:31.755464       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"71759172-0ccc-47bf-b198-5b2da54db950", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-412183_00060d43-6ab8-4be3-a0c0-3eff8a05ce05 became leader
	I0429 18:41:31.920009       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-412183_00060d43-6ab8-4be3-a0c0-3eff8a05ce05!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-412183 -n addons-412183
helpers_test.go:261: (dbg) Run:  kubectl --context addons-412183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (337.59s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.45s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-412183
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-412183: exit status 82 (2m0.47159665s)

                                                
                                                
-- stdout --
	* Stopping node "addons-412183"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-412183" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-412183
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-412183: exit status 11 (21.692076664s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.105:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-412183" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-412183
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-412183: exit status 11 (6.144263623s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.105:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-412183" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-412183
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-412183: exit status 11 (6.143901347s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.105:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-412183" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image save gcr.io/google-containers/addon-resizer:functional-828689 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-828689 image save gcr.io/google-containers/addon-resizer:functional-828689 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.151903362s)
functional_test.go:385: expected "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0429 18:58:24.594284   26128 out.go:291] Setting OutFile to fd 1 ...
	I0429 18:58:24.594557   26128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:58:24.594566   26128 out.go:304] Setting ErrFile to fd 2...
	I0429 18:58:24.594571   26128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:58:24.594761   26128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 18:58:24.595309   26128 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:58:24.595402   26128 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:58:24.595769   26128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:58:24.595815   26128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:58:24.610493   26128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33463
	I0429 18:58:24.610994   26128 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:58:24.611629   26128 main.go:141] libmachine: Using API Version  1
	I0429 18:58:24.611665   26128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:58:24.612051   26128 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:58:24.612259   26128 main.go:141] libmachine: (functional-828689) Calling .GetState
	I0429 18:58:24.613999   26128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:58:24.614041   26128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:58:24.629290   26128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36807
	I0429 18:58:24.629789   26128 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:58:24.630244   26128 main.go:141] libmachine: Using API Version  1
	I0429 18:58:24.630270   26128 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:58:24.630611   26128 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:58:24.630764   26128 main.go:141] libmachine: (functional-828689) Calling .DriverName
	I0429 18:58:24.630962   26128 ssh_runner.go:195] Run: systemctl --version
	I0429 18:58:24.630988   26128 main.go:141] libmachine: (functional-828689) Calling .GetSSHHostname
	I0429 18:58:24.633787   26128 main.go:141] libmachine: (functional-828689) DBG | domain functional-828689 has defined MAC address 52:54:00:39:76:01 in network mk-functional-828689
	I0429 18:58:24.634261   26128 main.go:141] libmachine: (functional-828689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:76:01", ip: ""} in network mk-functional-828689: {Iface:virbr1 ExpiryTime:2024-04-29 19:53:44 +0000 UTC Type:0 Mac:52:54:00:39:76:01 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-828689 Clientid:01:52:54:00:39:76:01}
	I0429 18:58:24.634286   26128 main.go:141] libmachine: (functional-828689) DBG | domain functional-828689 has defined IP address 192.168.39.72 and MAC address 52:54:00:39:76:01 in network mk-functional-828689
	I0429 18:58:24.634463   26128 main.go:141] libmachine: (functional-828689) Calling .GetSSHPort
	I0429 18:58:24.634670   26128 main.go:141] libmachine: (functional-828689) Calling .GetSSHKeyPath
	I0429 18:58:24.634831   26128 main.go:141] libmachine: (functional-828689) Calling .GetSSHUsername
	I0429 18:58:24.634977   26128 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/functional-828689/id_rsa Username:docker}
	I0429 18:58:24.725239   26128 cache_images.go:286] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar
	W0429 18:58:24.725312   26128 cache_images.go:254] Failed to load cached images for profile functional-828689. make sure the profile is running. loading images: stat /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar: no such file or directory
	I0429 18:58:24.725335   26128 cache_images.go:262] succeeded pushing to: 
	I0429 18:58:24.725342   26128 cache_images.go:263] failed pushing to: functional-828689
	I0429 18:58:24.725367   26128 main.go:141] libmachine: Making call to close driver server
	I0429 18:58:24.725377   26128 main.go:141] libmachine: (functional-828689) Calling .Close
	I0429 18:58:24.725635   26128 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:58:24.725652   26128 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:58:24.725653   26128 main.go:141] libmachine: (functional-828689) DBG | Closing plugin on server side
	I0429 18:58:24.725665   26128 main.go:141] libmachine: Making call to close driver server
	I0429 18:58:24.725673   26128 main.go:141] libmachine: (functional-828689) Calling .Close
	I0429 18:58:24.726012   26128 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:58:24.726088   26128 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:58:24.726122   26128 main.go:141] libmachine: (functional-828689) DBG | Closing plugin on server side

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
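This failure is downstream of the previous one: the tarball was never written by ImageSaveToFile, so `image load` has nothing to load (see the `stat ... no such file or directory` warning from cache_images.go in the stderr above). A hedged sketch that surfaces the missing-file cause by stat-ing the archive before attempting the load (again an illustration, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tar := "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar"
	if _, err := os.Stat(tar); err != nil {
		// Matches the root cause above: the file from the save step does not exist.
		fmt.Printf("nothing to load, %s is missing: %v\n", tar, err)
		return
	}
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-828689",
		"image", "load", tar).CombinedOutput()
	fmt.Printf("err=%v\n%s\n", err, out)
}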

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 node stop m02 -v=7 --alsologtostderr
E0429 19:05:32.757324   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-058855 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.484217139s)

                                                
                                                
-- stdout --
	* Stopping node "ha-058855-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:04:40.426897   32859 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:04:40.427025   32859 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:04:40.427037   32859 out.go:304] Setting ErrFile to fd 2...
	I0429 19:04:40.427043   32859 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:04:40.427230   32859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:04:40.427479   32859 mustload.go:65] Loading cluster: ha-058855
	I0429 19:04:40.427840   32859 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:04:40.427855   32859 stop.go:39] StopHost: ha-058855-m02
	I0429 19:04:40.428173   32859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:04:40.428207   32859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:04:40.445367   32859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38663
	I0429 19:04:40.445818   32859 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:04:40.446433   32859 main.go:141] libmachine: Using API Version  1
	I0429 19:04:40.446467   32859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:04:40.446832   32859 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:04:40.449114   32859 out.go:177] * Stopping node "ha-058855-m02"  ...
	I0429 19:04:40.450343   32859 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 19:04:40.450389   32859 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:04:40.450619   32859 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 19:04:40.450640   32859 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:04:40.453627   32859 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:04:40.454016   32859 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:04:40.454038   32859 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:04:40.454234   32859 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:04:40.454428   32859 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:04:40.454592   32859 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:04:40.454763   32859 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	I0429 19:04:40.546468   32859 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 19:04:40.603466   32859 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 19:04:40.660311   32859 main.go:141] libmachine: Stopping "ha-058855-m02"...
	I0429 19:04:40.660353   32859 main.go:141] libmachine: (ha-058855-m02) Calling .GetState
	I0429 19:04:40.661814   32859 main.go:141] libmachine: (ha-058855-m02) Calling .Stop
	I0429 19:04:40.665371   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 0/120
	I0429 19:04:41.666812   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 1/120
	I0429 19:04:42.668160   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 2/120
	I0429 19:04:43.669959   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 3/120
	I0429 19:04:44.671227   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 4/120
	I0429 19:04:45.673215   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 5/120
	I0429 19:04:46.674712   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 6/120
	I0429 19:04:47.676401   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 7/120
	I0429 19:04:48.678301   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 8/120
	I0429 19:04:49.679510   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 9/120
	I0429 19:04:50.681391   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 10/120
	I0429 19:04:51.682706   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 11/120
	I0429 19:04:52.684054   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 12/120
	I0429 19:04:53.686434   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 13/120
	I0429 19:04:54.687711   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 14/120
	I0429 19:04:55.689225   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 15/120
	I0429 19:04:56.690575   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 16/120
	I0429 19:04:57.691896   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 17/120
	I0429 19:04:58.693881   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 18/120
	I0429 19:04:59.695815   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 19/120
	I0429 19:05:00.698250   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 20/120
	I0429 19:05:01.700544   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 21/120
	I0429 19:05:02.702505   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 22/120
	I0429 19:05:03.704634   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 23/120
	I0429 19:05:04.705950   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 24/120
	I0429 19:05:05.708288   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 25/120
	I0429 19:05:06.709819   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 26/120
	I0429 19:05:07.711226   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 27/120
	I0429 19:05:08.712515   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 28/120
	I0429 19:05:09.713889   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 29/120
	I0429 19:05:10.715474   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 30/120
	I0429 19:05:11.716692   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 31/120
	I0429 19:05:12.717996   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 32/120
	I0429 19:05:13.719556   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 33/120
	I0429 19:05:14.720946   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 34/120
	I0429 19:05:15.722979   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 35/120
	I0429 19:05:16.724279   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 36/120
	I0429 19:05:17.725754   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 37/120
	I0429 19:05:18.727396   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 38/120
	I0429 19:05:19.729043   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 39/120
	I0429 19:05:20.731257   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 40/120
	I0429 19:05:21.733008   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 41/120
	I0429 19:05:22.734413   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 42/120
	I0429 19:05:23.736808   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 43/120
	I0429 19:05:24.738112   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 44/120
	I0429 19:05:25.739880   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 45/120
	I0429 19:05:26.741837   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 46/120
	I0429 19:05:27.743769   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 47/120
	I0429 19:05:28.745011   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 48/120
	I0429 19:05:29.746756   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 49/120
	I0429 19:05:30.748369   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 50/120
	I0429 19:05:31.749618   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 51/120
	I0429 19:05:32.751060   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 52/120
	I0429 19:05:33.752399   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 53/120
	I0429 19:05:34.753563   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 54/120
	I0429 19:05:35.754844   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 55/120
	I0429 19:05:36.756430   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 56/120
	I0429 19:05:37.757579   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 57/120
	I0429 19:05:38.759695   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 58/120
	I0429 19:05:39.761065   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 59/120
	I0429 19:05:40.763125   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 60/120
	I0429 19:05:41.764829   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 61/120
	I0429 19:05:42.766908   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 62/120
	I0429 19:05:43.768561   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 63/120
	I0429 19:05:44.770700   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 64/120
	I0429 19:05:45.772527   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 65/120
	I0429 19:05:46.773842   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 66/120
	I0429 19:05:47.775256   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 67/120
	I0429 19:05:48.776755   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 68/120
	I0429 19:05:49.778227   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 69/120
	I0429 19:05:50.780220   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 70/120
	I0429 19:05:51.781614   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 71/120
	I0429 19:05:52.783778   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 72/120
	I0429 19:05:53.785217   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 73/120
	I0429 19:05:54.786702   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 74/120
	I0429 19:05:55.788619   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 75/120
	I0429 19:05:56.790138   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 76/120
	I0429 19:05:57.791433   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 77/120
	I0429 19:05:58.792912   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 78/120
	I0429 19:05:59.794114   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 79/120
	I0429 19:06:00.796052   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 80/120
	I0429 19:06:01.797412   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 81/120
	I0429 19:06:02.798613   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 82/120
	I0429 19:06:03.799920   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 83/120
	I0429 19:06:04.800985   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 84/120
	I0429 19:06:05.802352   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 85/120
	I0429 19:06:06.803960   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 86/120
	I0429 19:06:07.805461   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 87/120
	I0429 19:06:08.807029   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 88/120
	I0429 19:06:09.808704   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 89/120
	I0429 19:06:10.809856   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 90/120
	I0429 19:06:11.811279   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 91/120
	I0429 19:06:12.813040   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 92/120
	I0429 19:06:13.815112   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 93/120
	I0429 19:06:14.816359   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 94/120
	I0429 19:06:15.818337   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 95/120
	I0429 19:06:16.819731   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 96/120
	I0429 19:06:17.821031   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 97/120
	I0429 19:06:18.822484   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 98/120
	I0429 19:06:19.823837   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 99/120
	I0429 19:06:20.825882   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 100/120
	I0429 19:06:21.827591   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 101/120
	I0429 19:06:22.828847   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 102/120
	I0429 19:06:23.831097   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 103/120
	I0429 19:06:24.832676   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 104/120
	I0429 19:06:25.834661   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 105/120
	I0429 19:06:26.836496   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 106/120
	I0429 19:06:27.837630   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 107/120
	I0429 19:06:28.839080   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 108/120
	I0429 19:06:29.840912   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 109/120
	I0429 19:06:30.842890   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 110/120
	I0429 19:06:31.844194   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 111/120
	I0429 19:06:32.845442   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 112/120
	I0429 19:06:33.846630   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 113/120
	I0429 19:06:34.848647   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 114/120
	I0429 19:06:35.850439   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 115/120
	I0429 19:06:36.851743   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 116/120
	I0429 19:06:37.853211   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 117/120
	I0429 19:06:38.854898   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 118/120
	I0429 19:06:39.856047   32859 main.go:141] libmachine: (ha-058855-m02) Waiting for machine to stop 119/120
	I0429 19:06:40.857028   32859 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0429 19:06:40.857193   32859 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-058855 node stop m02 -v=7 --alsologtostderr": exit status 30
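The stderr above shows the shape of this failure: `node stop` asks the driver to stop the VM, then polls its state once per second for 120 attempts; when the machine is still "Running" after the last attempt it gives up and the command exits with status 30. A small Go approximation of that wait loop, where getState is a hypothetical stand-in for the libmachine driver call:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls the machine state once per second, mirroring the
// "Waiting for machine to stop N/120" lines in the log above.
func waitForStop(getState func() string, attempts int) error {
	for i := 0; i < attempts; i++ {
		if getState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A VM that never leaves "Running" reproduces the failure; the real loop
	// uses 120 attempts, shortened here so the example finishes quickly.
	err := waitForStop(func() string { return "Running" }, 5)
	fmt.Println("stop err:", err)
}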
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr: exit status 3 (19.23519502s)

                                                
                                                
-- stdout --
	ha-058855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-058855-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:06:40.911986   33990 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:06:40.912097   33990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:06:40.912106   33990 out.go:304] Setting ErrFile to fd 2...
	I0429 19:06:40.912110   33990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:06:40.912291   33990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:06:40.912455   33990 out.go:298] Setting JSON to false
	I0429 19:06:40.912483   33990 mustload.go:65] Loading cluster: ha-058855
	I0429 19:06:40.912569   33990 notify.go:220] Checking for updates...
	I0429 19:06:40.912866   33990 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:06:40.912882   33990 status.go:255] checking status of ha-058855 ...
	I0429 19:06:40.913283   33990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:06:40.913334   33990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:06:40.933759   33990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
	I0429 19:06:40.934190   33990 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:06:40.934751   33990 main.go:141] libmachine: Using API Version  1
	I0429 19:06:40.934773   33990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:06:40.935167   33990 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:06:40.935393   33990 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:06:40.936894   33990 status.go:330] ha-058855 host status = "Running" (err=<nil>)
	I0429 19:06:40.936908   33990 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:06:40.937185   33990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:06:40.937222   33990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:06:40.951804   33990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37183
	I0429 19:06:40.952287   33990 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:06:40.952773   33990 main.go:141] libmachine: Using API Version  1
	I0429 19:06:40.952800   33990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:06:40.953101   33990 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:06:40.953274   33990 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:06:40.955853   33990 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:06:40.956291   33990 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:06:40.956316   33990 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:06:40.956460   33990 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:06:40.956764   33990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:06:40.956799   33990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:06:40.971124   33990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36945
	I0429 19:06:40.971434   33990 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:06:40.971872   33990 main.go:141] libmachine: Using API Version  1
	I0429 19:06:40.971897   33990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:06:40.972169   33990 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:06:40.972348   33990 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:06:40.972531   33990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:06:40.972561   33990 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:06:40.975012   33990 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:06:40.975384   33990 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:06:40.975416   33990 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:06:40.975508   33990 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:06:40.975648   33990 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:06:40.975771   33990 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:06:40.975931   33990 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:06:41.064395   33990 ssh_runner.go:195] Run: systemctl --version
	I0429 19:06:41.072194   33990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:06:41.091457   33990 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:06:41.091483   33990 api_server.go:166] Checking apiserver status ...
	I0429 19:06:41.091516   33990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:06:41.110621   33990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0429 19:06:41.123562   33990 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:06:41.123644   33990 ssh_runner.go:195] Run: ls
	I0429 19:06:41.129620   33990 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:06:41.134803   33990 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:06:41.134851   33990 status.go:422] ha-058855 apiserver status = Running (err=<nil>)
	I0429 19:06:41.134866   33990 status.go:257] ha-058855 status: &{Name:ha-058855 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:06:41.134894   33990 status.go:255] checking status of ha-058855-m02 ...
	I0429 19:06:41.135199   33990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:06:41.135269   33990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:06:41.150433   33990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I0429 19:06:41.150841   33990 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:06:41.151286   33990 main.go:141] libmachine: Using API Version  1
	I0429 19:06:41.151309   33990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:06:41.151640   33990 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:06:41.151784   33990 main.go:141] libmachine: (ha-058855-m02) Calling .GetState
	I0429 19:06:41.153258   33990 status.go:330] ha-058855-m02 host status = "Running" (err=<nil>)
	I0429 19:06:41.153285   33990 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:06:41.153561   33990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:06:41.153599   33990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:06:41.168561   33990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36699
	I0429 19:06:41.168960   33990 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:06:41.169433   33990 main.go:141] libmachine: Using API Version  1
	I0429 19:06:41.169462   33990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:06:41.169776   33990 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:06:41.169956   33990 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:06:41.172878   33990 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:06:41.173313   33990 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:06:41.173336   33990 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:06:41.173498   33990 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:06:41.173836   33990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:06:41.173877   33990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:06:41.188770   33990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44227
	I0429 19:06:41.189263   33990 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:06:41.189785   33990 main.go:141] libmachine: Using API Version  1
	I0429 19:06:41.189808   33990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:06:41.190078   33990 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:06:41.190261   33990 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:06:41.190435   33990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:06:41.190457   33990 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:06:41.193359   33990 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:06:41.193848   33990 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:06:41.193875   33990 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:06:41.194039   33990 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:06:41.194223   33990 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:06:41.194371   33990 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:06:41.194527   33990 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	W0429 19:06:59.690330   33990 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0429 19:06:59.690437   33990 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0429 19:06:59.690462   33990 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:06:59.690473   33990 status.go:257] ha-058855-m02 status: &{Name:ha-058855-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 19:06:59.690527   33990 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:06:59.690541   33990 status.go:255] checking status of ha-058855-m03 ...
	I0429 19:06:59.690979   33990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:06:59.691038   33990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:06:59.706085   33990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38827
	I0429 19:06:59.706549   33990 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:06:59.707010   33990 main.go:141] libmachine: Using API Version  1
	I0429 19:06:59.707035   33990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:06:59.707320   33990 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:06:59.707533   33990 main.go:141] libmachine: (ha-058855-m03) Calling .GetState
	I0429 19:06:59.709349   33990 status.go:330] ha-058855-m03 host status = "Running" (err=<nil>)
	I0429 19:06:59.709369   33990 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:06:59.709762   33990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:06:59.709806   33990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:06:59.724604   33990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45947
	I0429 19:06:59.725071   33990 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:06:59.725534   33990 main.go:141] libmachine: Using API Version  1
	I0429 19:06:59.725551   33990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:06:59.725858   33990 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:06:59.726055   33990 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:06:59.728729   33990 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:06:59.729149   33990 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:06:59.729170   33990 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:06:59.729297   33990 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:06:59.729662   33990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:06:59.729696   33990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:06:59.746574   33990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38913
	I0429 19:06:59.747145   33990 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:06:59.747686   33990 main.go:141] libmachine: Using API Version  1
	I0429 19:06:59.747716   33990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:06:59.748048   33990 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:06:59.748288   33990 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:06:59.748477   33990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:06:59.748501   33990 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:06:59.751604   33990 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:06:59.752157   33990 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:06:59.752184   33990 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:06:59.752373   33990 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:06:59.752568   33990 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:06:59.752728   33990 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:06:59.752862   33990 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:06:59.848105   33990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:06:59.876096   33990 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:06:59.876121   33990 api_server.go:166] Checking apiserver status ...
	I0429 19:06:59.876149   33990 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:06:59.898586   33990 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0429 19:06:59.911052   33990 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:06:59.911120   33990 ssh_runner.go:195] Run: ls
	I0429 19:06:59.916970   33990 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:06:59.923386   33990 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:06:59.923416   33990 status.go:422] ha-058855-m03 apiserver status = Running (err=<nil>)
	I0429 19:06:59.923425   33990 status.go:257] ha-058855-m03 status: &{Name:ha-058855-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:06:59.923441   33990 status.go:255] checking status of ha-058855-m04 ...
	I0429 19:06:59.923740   33990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:06:59.923782   33990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:06:59.938766   33990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45523
	I0429 19:06:59.939222   33990 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:06:59.939771   33990 main.go:141] libmachine: Using API Version  1
	I0429 19:06:59.939801   33990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:06:59.940136   33990 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:06:59.940340   33990 main.go:141] libmachine: (ha-058855-m04) Calling .GetState
	I0429 19:06:59.941959   33990 status.go:330] ha-058855-m04 host status = "Running" (err=<nil>)
	I0429 19:06:59.941977   33990 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:06:59.942329   33990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:06:59.942371   33990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:06:59.957476   33990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44903
	I0429 19:06:59.957853   33990 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:06:59.958410   33990 main.go:141] libmachine: Using API Version  1
	I0429 19:06:59.958431   33990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:06:59.958742   33990 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:06:59.958955   33990 main.go:141] libmachine: (ha-058855-m04) Calling .GetIP
	I0429 19:06:59.961744   33990 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:06:59.962216   33990 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:06:59.962243   33990 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:06:59.962357   33990 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:06:59.962656   33990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:06:59.962697   33990 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:06:59.979749   33990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35209
	I0429 19:06:59.980223   33990 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:06:59.980747   33990 main.go:141] libmachine: Using API Version  1
	I0429 19:06:59.980775   33990 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:06:59.981039   33990 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:06:59.981210   33990 main.go:141] libmachine: (ha-058855-m04) Calling .DriverName
	I0429 19:06:59.981352   33990 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:06:59.981377   33990 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHHostname
	I0429 19:06:59.984056   33990 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:06:59.984444   33990 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:06:59.984474   33990 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:06:59.984584   33990 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHPort
	I0429 19:06:59.984758   33990 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHKeyPath
	I0429 19:06:59.984870   33990 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHUsername
	I0429 19:06:59.984953   33990 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m04/id_rsa Username:docker}
	I0429 19:07:00.072302   33990 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:00.091370   33990 status.go:257] ha-058855-m04 status: &{Name:ha-058855-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr" : exit status 3
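`status` exits 3 here because one per-node probe failed: the SSH dial to ha-058855-m02 (192.168.39.27:22) returned "no route to host", so that node is reported as Host:Error / Kubelet:Nonexistent while the other nodes answer normally. A rough Go sketch of such a probe is below; the nodeStatus struct and the 10s timeout are assumptions for illustration, not minikube's own types or settings.

package main

import (
	"fmt"
	"net"
	"time"
)

type nodeStatus struct {
	Name, Host, Kubelet string
}

// probe marks a node as Error/Nonexistent when its SSH port is unreachable,
// roughly matching the status output shown above.
func probe(name, addr string) nodeStatus {
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		return nodeStatus{Name: name, Host: "Error", Kubelet: "Nonexistent"}
	}
	conn.Close()
	return nodeStatus{Name: name, Host: "Running", Kubelet: "Running"}
}

func main() {
	// 192.168.39.27:22 is ha-058855-m02's SSH endpoint from the log above.
	fmt.Printf("%+v\n", probe("ha-058855-m02", "192.168.39.27:22"))
}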
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-058855 -n ha-058855
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-058855 logs -n 25: (1.633664552s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1826286980/001/cp-test_ha-058855-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855:/home/docker/cp-test_ha-058855-m03_ha-058855.txt                       |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855 sudo cat                                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m03_ha-058855.txt                                 |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m02:/home/docker/cp-test_ha-058855-m03_ha-058855-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m02 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m03_ha-058855-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04:/home/docker/cp-test_ha-058855-m03_ha-058855-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m04 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m03_ha-058855-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-058855 cp testdata/cp-test.txt                                                | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1826286980/001/cp-test_ha-058855-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855:/home/docker/cp-test_ha-058855-m04_ha-058855.txt                       |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855 sudo cat                                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m04_ha-058855.txt                                 |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m02:/home/docker/cp-test_ha-058855-m04_ha-058855-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m02 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m04_ha-058855-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03:/home/docker/cp-test_ha-058855-m04_ha-058855-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m03 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m04_ha-058855-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-058855 node stop m02 -v=7                                                     | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
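
The rows above repeat one pattern from the MultiControlPlane CopyFile test: copy cp-test.txt with minikube cp (either down to the host or across to another node) and then read it back with minikube ssh -n <node> sudo cat ... to confirm the copy landed intact. Below is a minimal, hedged Go sketch of that copy-then-verify loop; the profile name, node name and remote path are taken from the table, while the scratch path and the helper function are assumptions for illustration.

// Illustrative sketch of the cp-then-cat verification pattern in the table above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// minikube runs the minikube CLI and returns its combined output.
func minikube(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile, node := "ha-058855", "ha-058855-m04"
	local := "/tmp/cp-test-copy.txt" // assumed scratch path

	// Copy the file off the node, then read the original back over SSH.
	if out, err := minikube("-p", profile, "cp", node+":/home/docker/cp-test.txt", local); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	remote, err := minikube("-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	copied, err := os.ReadFile(local)
	if err != nil {
		log.Fatal(err)
	}
	if strings.TrimSpace(string(copied)) != strings.TrimSpace(remote) {
		log.Fatal("copied contents do not match the file on the node")
	}
	fmt.Println("cp-test.txt verified on", node)
}
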
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 18:58:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 18:58:45.981713   26778 out.go:291] Setting OutFile to fd 1 ...
	I0429 18:58:45.982017   26778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:58:45.982030   26778 out.go:304] Setting ErrFile to fd 2...
	I0429 18:58:45.982037   26778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:58:45.982269   26778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 18:58:45.982917   26778 out.go:298] Setting JSON to false
	I0429 18:58:45.983844   26778 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2424,"bootTime":1714414702,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 18:58:45.983913   26778 start.go:139] virtualization: kvm guest
	I0429 18:58:45.986353   26778 out.go:177] * [ha-058855] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 18:58:45.988095   26778 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 18:58:45.988015   26778 notify.go:220] Checking for updates...
	I0429 18:58:45.991450   26778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 18:58:45.992910   26778 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:58:45.994268   26778 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:58:45.995790   26778 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 18:58:45.997240   26778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 18:58:46.005382   26778 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 18:58:46.041163   26778 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 18:58:46.042692   26778 start.go:297] selected driver: kvm2
	I0429 18:58:46.042705   26778 start.go:901] validating driver "kvm2" against <nil>
	I0429 18:58:46.042718   26778 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 18:58:46.043374   26778 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:58:46.043450   26778 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 18:58:46.058631   26778 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 18:58:46.058717   26778 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 18:58:46.059010   26778 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 18:58:46.059085   26778 cni.go:84] Creating CNI manager for ""
	I0429 18:58:46.059101   26778 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 18:58:46.059106   26778 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 18:58:46.059194   26778 start.go:340] cluster config:
	{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:58:46.059344   26778 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:58:46.062290   26778 out.go:177] * Starting "ha-058855" primary control-plane node in "ha-058855" cluster
	I0429 18:58:46.063881   26778 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 18:58:46.063918   26778 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 18:58:46.063925   26778 cache.go:56] Caching tarball of preloaded images
	I0429 18:58:46.064026   26778 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 18:58:46.064036   26778 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 18:58:46.064344   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 18:58:46.064366   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json: {Name:mk48010ce9611f8eba62bb08b5dc0da5b3034370 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:58:46.064489   26778 start.go:360] acquireMachinesLock for ha-058855: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 18:58:46.064516   26778 start.go:364] duration metric: took 14.602µs to acquireMachinesLock for "ha-058855"
	I0429 18:58:46.064533   26778 start.go:93] Provisioning new machine with config: &{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 18:58:46.064590   26778 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 18:58:46.066349   26778 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 18:58:46.066478   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:58:46.066510   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:58:46.080288   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0429 18:58:46.080776   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:58:46.081375   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:58:46.081401   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:58:46.081731   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:58:46.081953   26778 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 18:58:46.082148   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:58:46.082320   26778 start.go:159] libmachine.API.Create for "ha-058855" (driver="kvm2")
	I0429 18:58:46.082360   26778 client.go:168] LocalClient.Create starting
	I0429 18:58:46.082398   26778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 18:58:46.082441   26778 main.go:141] libmachine: Decoding PEM data...
	I0429 18:58:46.082461   26778 main.go:141] libmachine: Parsing certificate...
	I0429 18:58:46.082546   26778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 18:58:46.082578   26778 main.go:141] libmachine: Decoding PEM data...
	I0429 18:58:46.082603   26778 main.go:141] libmachine: Parsing certificate...
	I0429 18:58:46.082635   26778 main.go:141] libmachine: Running pre-create checks...
	I0429 18:58:46.082648   26778 main.go:141] libmachine: (ha-058855) Calling .PreCreateCheck
	I0429 18:58:46.082977   26778 main.go:141] libmachine: (ha-058855) Calling .GetConfigRaw
	I0429 18:58:46.083418   26778 main.go:141] libmachine: Creating machine...
	I0429 18:58:46.083438   26778 main.go:141] libmachine: (ha-058855) Calling .Create
	I0429 18:58:46.083581   26778 main.go:141] libmachine: (ha-058855) Creating KVM machine...
	I0429 18:58:46.084823   26778 main.go:141] libmachine: (ha-058855) DBG | found existing default KVM network
	I0429 18:58:46.085443   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:46.085290   26801 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1e0}
	I0429 18:58:46.085462   26778 main.go:141] libmachine: (ha-058855) DBG | created network xml: 
	I0429 18:58:46.085476   26778 main.go:141] libmachine: (ha-058855) DBG | <network>
	I0429 18:58:46.085484   26778 main.go:141] libmachine: (ha-058855) DBG |   <name>mk-ha-058855</name>
	I0429 18:58:46.085493   26778 main.go:141] libmachine: (ha-058855) DBG |   <dns enable='no'/>
	I0429 18:58:46.085607   26778 main.go:141] libmachine: (ha-058855) DBG |   
	I0429 18:58:46.085628   26778 main.go:141] libmachine: (ha-058855) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0429 18:58:46.085637   26778 main.go:141] libmachine: (ha-058855) DBG |     <dhcp>
	I0429 18:58:46.085645   26778 main.go:141] libmachine: (ha-058855) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0429 18:58:46.085653   26778 main.go:141] libmachine: (ha-058855) DBG |     </dhcp>
	I0429 18:58:46.085660   26778 main.go:141] libmachine: (ha-058855) DBG |   </ip>
	I0429 18:58:46.085668   26778 main.go:141] libmachine: (ha-058855) DBG |   
	I0429 18:58:46.085674   26778 main.go:141] libmachine: (ha-058855) DBG | </network>
	I0429 18:58:46.085681   26778 main.go:141] libmachine: (ha-058855) DBG | 
	I0429 18:58:46.090762   26778 main.go:141] libmachine: (ha-058855) DBG | trying to create private KVM network mk-ha-058855 192.168.39.0/24...
	I0429 18:58:46.156910   26778 main.go:141] libmachine: (ha-058855) DBG | private KVM network mk-ha-058855 192.168.39.0/24 created
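
The network XML printed above is what gets handed to libvirt to create the private mk-ha-058855 network with its own DHCP range. The kvm2 driver does this through the libvirt API; as a rough, hedged equivalent, the same definition can be applied from Go by shelling out to virsh, as in the sketch below. Having virsh on PATH and hard-coding the XML are assumptions for illustration, not what the driver literally does.

// Sketch: define, start, and autostart the private libvirt network from the XML above.
package main

import (
	"log"
	"os"
	"os/exec"
)

const netXML = `<network>
  <name>mk-ha-058855</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(netXML); err != nil {
		log.Fatal(err)
	}
	f.Close()
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-ha-058855"},
		{"net-autostart", "mk-ha-058855"},
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
	}
}
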
	I0429 18:58:46.156949   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:46.156890   26801 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:58:46.156961   26778 main.go:141] libmachine: (ha-058855) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855 ...
	I0429 18:58:46.156988   26778 main.go:141] libmachine: (ha-058855) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 18:58:46.157020   26778 main.go:141] libmachine: (ha-058855) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 18:58:46.384628   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:46.384497   26801 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa...
	I0429 18:58:46.506043   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:46.505915   26801 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/ha-058855.rawdisk...
	I0429 18:58:46.506095   26778 main.go:141] libmachine: (ha-058855) DBG | Writing magic tar header
	I0429 18:58:46.506117   26778 main.go:141] libmachine: (ha-058855) DBG | Writing SSH key tar header
	I0429 18:58:46.506128   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:46.506029   26801 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855 ...
	I0429 18:58:46.506190   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855
	I0429 18:58:46.506224   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 18:58:46.506241   26778 main.go:141] libmachine: (ha-058855) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855 (perms=drwx------)
	I0429 18:58:46.506254   26778 main.go:141] libmachine: (ha-058855) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 18:58:46.506260   26778 main.go:141] libmachine: (ha-058855) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 18:58:46.506267   26778 main.go:141] libmachine: (ha-058855) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 18:58:46.506274   26778 main.go:141] libmachine: (ha-058855) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 18:58:46.506283   26778 main.go:141] libmachine: (ha-058855) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 18:58:46.506294   26778 main.go:141] libmachine: (ha-058855) Creating domain...
	I0429 18:58:46.506309   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:58:46.506324   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 18:58:46.506335   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 18:58:46.506346   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home/jenkins
	I0429 18:58:46.506353   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home
	I0429 18:58:46.506364   26778 main.go:141] libmachine: (ha-058855) DBG | Skipping /home - not owner
	I0429 18:58:46.507454   26778 main.go:141] libmachine: (ha-058855) define libvirt domain using xml: 
	I0429 18:58:46.507486   26778 main.go:141] libmachine: (ha-058855) <domain type='kvm'>
	I0429 18:58:46.507498   26778 main.go:141] libmachine: (ha-058855)   <name>ha-058855</name>
	I0429 18:58:46.507513   26778 main.go:141] libmachine: (ha-058855)   <memory unit='MiB'>2200</memory>
	I0429 18:58:46.507528   26778 main.go:141] libmachine: (ha-058855)   <vcpu>2</vcpu>
	I0429 18:58:46.507538   26778 main.go:141] libmachine: (ha-058855)   <features>
	I0429 18:58:46.507545   26778 main.go:141] libmachine: (ha-058855)     <acpi/>
	I0429 18:58:46.507550   26778 main.go:141] libmachine: (ha-058855)     <apic/>
	I0429 18:58:46.507557   26778 main.go:141] libmachine: (ha-058855)     <pae/>
	I0429 18:58:46.507565   26778 main.go:141] libmachine: (ha-058855)     
	I0429 18:58:46.507574   26778 main.go:141] libmachine: (ha-058855)   </features>
	I0429 18:58:46.507584   26778 main.go:141] libmachine: (ha-058855)   <cpu mode='host-passthrough'>
	I0429 18:58:46.507605   26778 main.go:141] libmachine: (ha-058855)   
	I0429 18:58:46.507620   26778 main.go:141] libmachine: (ha-058855)   </cpu>
	I0429 18:58:46.507636   26778 main.go:141] libmachine: (ha-058855)   <os>
	I0429 18:58:46.507648   26778 main.go:141] libmachine: (ha-058855)     <type>hvm</type>
	I0429 18:58:46.507657   26778 main.go:141] libmachine: (ha-058855)     <boot dev='cdrom'/>
	I0429 18:58:46.507673   26778 main.go:141] libmachine: (ha-058855)     <boot dev='hd'/>
	I0429 18:58:46.507681   26778 main.go:141] libmachine: (ha-058855)     <bootmenu enable='no'/>
	I0429 18:58:46.507685   26778 main.go:141] libmachine: (ha-058855)   </os>
	I0429 18:58:46.507694   26778 main.go:141] libmachine: (ha-058855)   <devices>
	I0429 18:58:46.507699   26778 main.go:141] libmachine: (ha-058855)     <disk type='file' device='cdrom'>
	I0429 18:58:46.507709   26778 main.go:141] libmachine: (ha-058855)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/boot2docker.iso'/>
	I0429 18:58:46.507715   26778 main.go:141] libmachine: (ha-058855)       <target dev='hdc' bus='scsi'/>
	I0429 18:58:46.507720   26778 main.go:141] libmachine: (ha-058855)       <readonly/>
	I0429 18:58:46.507725   26778 main.go:141] libmachine: (ha-058855)     </disk>
	I0429 18:58:46.507733   26778 main.go:141] libmachine: (ha-058855)     <disk type='file' device='disk'>
	I0429 18:58:46.507752   26778 main.go:141] libmachine: (ha-058855)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 18:58:46.507763   26778 main.go:141] libmachine: (ha-058855)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/ha-058855.rawdisk'/>
	I0429 18:58:46.507768   26778 main.go:141] libmachine: (ha-058855)       <target dev='hda' bus='virtio'/>
	I0429 18:58:46.507777   26778 main.go:141] libmachine: (ha-058855)     </disk>
	I0429 18:58:46.507781   26778 main.go:141] libmachine: (ha-058855)     <interface type='network'>
	I0429 18:58:46.507787   26778 main.go:141] libmachine: (ha-058855)       <source network='mk-ha-058855'/>
	I0429 18:58:46.507792   26778 main.go:141] libmachine: (ha-058855)       <model type='virtio'/>
	I0429 18:58:46.507797   26778 main.go:141] libmachine: (ha-058855)     </interface>
	I0429 18:58:46.507804   26778 main.go:141] libmachine: (ha-058855)     <interface type='network'>
	I0429 18:58:46.507816   26778 main.go:141] libmachine: (ha-058855)       <source network='default'/>
	I0429 18:58:46.507825   26778 main.go:141] libmachine: (ha-058855)       <model type='virtio'/>
	I0429 18:58:46.507839   26778 main.go:141] libmachine: (ha-058855)     </interface>
	I0429 18:58:46.507856   26778 main.go:141] libmachine: (ha-058855)     <serial type='pty'>
	I0429 18:58:46.507869   26778 main.go:141] libmachine: (ha-058855)       <target port='0'/>
	I0429 18:58:46.507875   26778 main.go:141] libmachine: (ha-058855)     </serial>
	I0429 18:58:46.507880   26778 main.go:141] libmachine: (ha-058855)     <console type='pty'>
	I0429 18:58:46.507888   26778 main.go:141] libmachine: (ha-058855)       <target type='serial' port='0'/>
	I0429 18:58:46.507893   26778 main.go:141] libmachine: (ha-058855)     </console>
	I0429 18:58:46.507900   26778 main.go:141] libmachine: (ha-058855)     <rng model='virtio'>
	I0429 18:58:46.507907   26778 main.go:141] libmachine: (ha-058855)       <backend model='random'>/dev/random</backend>
	I0429 18:58:46.507914   26778 main.go:141] libmachine: (ha-058855)     </rng>
	I0429 18:58:46.507922   26778 main.go:141] libmachine: (ha-058855)     
	I0429 18:58:46.507932   26778 main.go:141] libmachine: (ha-058855)     
	I0429 18:58:46.507950   26778 main.go:141] libmachine: (ha-058855)   </devices>
	I0429 18:58:46.507963   26778 main.go:141] libmachine: (ha-058855) </domain>
	I0429 18:58:46.507972   26778 main.go:141] libmachine: (ha-058855) 
	I0429 18:58:46.512516   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:30:77:6b in network default
	I0429 18:58:46.513053   26778 main.go:141] libmachine: (ha-058855) Ensuring networks are active...
	I0429 18:58:46.513097   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:46.513811   26778 main.go:141] libmachine: (ha-058855) Ensuring network default is active
	I0429 18:58:46.514219   26778 main.go:141] libmachine: (ha-058855) Ensuring network mk-ha-058855 is active
	I0429 18:58:46.514729   26778 main.go:141] libmachine: (ha-058855) Getting domain xml...
	I0429 18:58:46.515445   26778 main.go:141] libmachine: (ha-058855) Creating domain...
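
The domain XML logged above is produced by substituting machine-specific values (name, memory, vCPUs, ISO and raw-disk paths, network name) into a template before it is defined against libvirt. The sketch below shows that templating step with Go's text/template on a trimmed-down copy of the XML; the struct and its field names are illustrative, and the real template carries more devices (second NIC, serial console, RNG).

// Sketch: render a minimal KVM domain definition from machine parameters.
package main

import (
	"log"
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>`

type machine struct {
	Name, ISO, Disk, Network string
	MemoryMiB, CPUs          int
}

func main() {
	m := machine{
		Name: "ha-058855", MemoryMiB: 2200, CPUs: 2,
		ISO:     "/path/to/boot2docker.iso",   // assumed path
		Disk:    "/path/to/ha-058855.rawdisk", // assumed path
		Network: "mk-ha-058855",
	}
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// The driver feeds the rendered XML to libvirt's domain-define call; here we just print it.
	if err := t.Execute(os.Stdout, m); err != nil {
		log.Fatal(err)
	}
}
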
	I0429 18:58:47.715436   26778 main.go:141] libmachine: (ha-058855) Waiting to get IP...
	I0429 18:58:47.716319   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:47.716824   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:47.716864   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:47.716812   26801 retry.go:31] will retry after 294.883019ms: waiting for machine to come up
	I0429 18:58:48.013525   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:48.013974   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:48.014007   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:48.013934   26801 retry.go:31] will retry after 307.387741ms: waiting for machine to come up
	I0429 18:58:48.323461   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:48.323911   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:48.323934   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:48.323850   26801 retry.go:31] will retry after 334.207259ms: waiting for machine to come up
	I0429 18:58:48.659277   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:48.659684   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:48.659708   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:48.659648   26801 retry.go:31] will retry after 571.775593ms: waiting for machine to come up
	I0429 18:58:49.234694   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:49.235194   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:49.235221   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:49.235135   26801 retry.go:31] will retry after 502.125919ms: waiting for machine to come up
	I0429 18:58:49.738943   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:49.739428   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:49.739453   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:49.739378   26801 retry.go:31] will retry after 813.308401ms: waiting for machine to come up
	I0429 18:58:50.554246   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:50.554670   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:50.554703   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:50.554619   26801 retry.go:31] will retry after 1.177820988s: waiting for machine to come up
	I0429 18:58:51.734420   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:51.734872   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:51.734902   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:51.734817   26801 retry.go:31] will retry after 1.480258642s: waiting for machine to come up
	I0429 18:58:53.217397   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:53.217886   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:53.217905   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:53.217838   26801 retry.go:31] will retry after 1.797890934s: waiting for machine to come up
	I0429 18:58:55.018030   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:55.018466   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:55.018495   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:55.018423   26801 retry.go:31] will retry after 1.659555309s: waiting for machine to come up
	I0429 18:58:56.679239   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:56.679663   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:56.679693   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:56.679609   26801 retry.go:31] will retry after 2.631753998s: waiting for machine to come up
	I0429 18:58:59.314308   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:59.314778   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:59.314801   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:59.314737   26801 retry.go:31] will retry after 2.503386337s: waiting for machine to come up
	I0429 18:59:01.820186   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:01.820581   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:59:01.820608   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:59:01.820544   26801 retry.go:31] will retry after 4.232745054s: waiting for machine to come up
	I0429 18:59:06.057826   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:06.058177   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:59:06.058199   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:59:06.058134   26801 retry.go:31] will retry after 4.272974766s: waiting for machine to come up
	I0429 18:59:10.335751   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.336226   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has current primary IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.336241   26778 main.go:141] libmachine: (ha-058855) Found IP for machine: 192.168.39.52
	I0429 18:59:10.336254   26778 main.go:141] libmachine: (ha-058855) Reserving static IP address...
	I0429 18:59:10.336605   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find host DHCP lease matching {name: "ha-058855", mac: "52:54:00:bf:0c:a5", ip: "192.168.39.52"} in network mk-ha-058855
	I0429 18:59:10.407735   26778 main.go:141] libmachine: (ha-058855) DBG | Getting to WaitForSSH function...
	I0429 18:59:10.407762   26778 main.go:141] libmachine: (ha-058855) Reserved static IP address: 192.168.39.52
	I0429 18:59:10.407775   26778 main.go:141] libmachine: (ha-058855) Waiting for SSH to be available...
	I0429 18:59:10.409898   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.410305   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:10.410335   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.410451   26778 main.go:141] libmachine: (ha-058855) DBG | Using SSH client type: external
	I0429 18:59:10.410480   26778 main.go:141] libmachine: (ha-058855) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa (-rw-------)
	I0429 18:59:10.410512   26778 main.go:141] libmachine: (ha-058855) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 18:59:10.410523   26778 main.go:141] libmachine: (ha-058855) DBG | About to run SSH command:
	I0429 18:59:10.410550   26778 main.go:141] libmachine: (ha-058855) DBG | exit 0
	I0429 18:59:10.538010   26778 main.go:141] libmachine: (ha-058855) DBG | SSH cmd err, output: <nil>: 
	I0429 18:59:10.538317   26778 main.go:141] libmachine: (ha-058855) KVM machine creation complete!
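
The "waiting for machine to come up" sequence above is a bounded retry with a growing delay that keeps asking libvirt whether the VM's MAC address has picked up a DHCP lease yet. The sketch below reproduces that shape, shelling out to virsh net-dhcp-leases rather than using the libvirt API the driver actually calls; the network name and MAC are copied from the log, the retry limits are assumptions.

// Sketch: poll for a DHCP lease on a MAC address with increasing delays.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// leaseIP returns the IP leased to mac on the given libvirt network, or "" if none yet.
func leaseIP(network, mac string) string {
	out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
	if err != nil {
		return ""
	}
	for _, line := range strings.Split(string(out), "\n") {
		if !strings.Contains(strings.ToLower(line), strings.ToLower(mac)) {
			continue
		}
		for _, f := range strings.Fields(line) {
			if strings.Contains(f, "/") { // address column looks like 192.168.39.52/24
				return strings.SplitN(f, "/", 2)[0]
			}
		}
	}
	return ""
}

func main() {
	delay := 300 * time.Millisecond
	for i := 0; i < 15; i++ {
		if ip := leaseIP("mk-ha-058855", "52:54:00:bf:0c:a5"); ip != "" {
			fmt.Println("found IP:", ip)
			return
		}
		log.Printf("no lease yet, retrying after %v", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // back off roughly like the retries in the log above
	}
	log.Fatal("machine never acquired an IP")
}
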
	I0429 18:59:10.538640   26778 main.go:141] libmachine: (ha-058855) Calling .GetConfigRaw
	I0429 18:59:10.539113   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:10.539325   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:10.539469   26778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 18:59:10.539487   26778 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 18:59:10.540716   26778 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 18:59:10.540733   26778 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 18:59:10.540741   26778 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 18:59:10.540748   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:10.542802   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.543156   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:10.543178   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.543291   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:10.543460   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.543599   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.543743   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:10.543893   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 18:59:10.544113   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 18:59:10.544125   26778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 18:59:10.653739   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
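
The SSH availability probe above is nothing more than "run exit 0 over SSH and treat a zero exit status as reachable". A self-contained sketch of that check with golang.org/x/crypto/ssh follows; the address, user and key path mirror values from the log but are assumptions here, and skipping host-key verification matches the StrictHostKeyChecking=no / UserKnownHostsFile=/dev/null options used earlier.

// Sketch: probe SSH readiness by running `exit 0` on the guest.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReady dials addr as user with the given private key and runs `exit 0`; nil means SSH is up.
func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	if err := sshReady("192.168.39.52:22", "docker", "/path/to/id_rsa"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}
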
	I0429 18:59:10.653771   26778 main.go:141] libmachine: Detecting the provisioner...
	I0429 18:59:10.653784   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:10.656716   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.657192   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:10.657220   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.657378   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:10.657611   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.657816   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.657959   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:10.658145   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 18:59:10.658304   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 18:59:10.658314   26778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 18:59:10.771272   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 18:59:10.771353   26778 main.go:141] libmachine: found compatible host: buildroot
	I0429 18:59:10.771372   26778 main.go:141] libmachine: Provisioning with buildroot...
	I0429 18:59:10.771382   26778 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 18:59:10.771603   26778 buildroot.go:166] provisioning hostname "ha-058855"
	I0429 18:59:10.771625   26778 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 18:59:10.771832   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:10.774384   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.774652   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:10.774680   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.774825   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:10.774998   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.775152   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.775291   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:10.775441   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 18:59:10.775622   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 18:59:10.775644   26778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-058855 && echo "ha-058855" | sudo tee /etc/hostname
	I0429 18:59:10.907073   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-058855
	
	I0429 18:59:10.907102   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:10.909812   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.910149   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:10.910175   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.910338   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:10.910522   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.910657   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.910756   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:10.910877   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 18:59:10.911068   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 18:59:10.911087   26778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-058855' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-058855/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-058855' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 18:59:11.033157   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
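
The hostname step sets the transient hostname, writes /etc/hostname, and then patches /etc/hosts idempotently: rewrite an existing 127.0.1.1 entry if one is present, otherwise append one. The sketch below only shows how such a command string can be composed in Go before being sent over SSH; the helper name is illustrative, and it folds the two separate SSH commands from the log into a single string.

// Sketch: build the idempotent hostname/hosts update command for a given machine name.
package main

import "fmt"

func hostnameCmd(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname && `+
		`{ if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
		`if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
		`sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
		`else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi; }`, name)
}

func main() {
	fmt.Println(hostnameCmd("ha-058855"))
}
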
	I0429 18:59:11.033184   26778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 18:59:11.033210   26778 buildroot.go:174] setting up certificates
	I0429 18:59:11.033224   26778 provision.go:84] configureAuth start
	I0429 18:59:11.033238   26778 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 18:59:11.033492   26778 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 18:59:11.035787   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.036077   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.036105   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.036231   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.037934   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.038280   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.038310   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.038437   26778 provision.go:143] copyHostCerts
	I0429 18:59:11.038468   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 18:59:11.038501   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 18:59:11.038510   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 18:59:11.038577   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 18:59:11.038671   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 18:59:11.038688   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 18:59:11.038695   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 18:59:11.038732   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 18:59:11.038776   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 18:59:11.038792   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 18:59:11.038799   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 18:59:11.038818   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 18:59:11.038863   26778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.ha-058855 san=[127.0.0.1 192.168.39.52 ha-058855 localhost minikube]
	I0429 18:59:11.182794   26778 provision.go:177] copyRemoteCerts
	I0429 18:59:11.182851   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 18:59:11.182875   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.185284   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.185569   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.185598   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.185753   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:11.185951   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.186242   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:11.186394   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 18:59:11.273680   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 18:59:11.273764   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 18:59:11.299852   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 18:59:11.299907   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0429 18:59:11.325706   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 18:59:11.325772   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 18:59:11.351636   26778 provision.go:87] duration metric: took 318.397502ms to configureAuth
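
configureAuth above issues a server certificate signed by the local CA whose SANs cover 127.0.0.1, the VM IP, the hostname, localhost and minikube, then copies it to /etc/docker on the guest. Below is a minimal crypto/x509 sketch of issuing such a certificate; it generates a throwaway CA in memory instead of loading ca.pem/ca-key.pem from .minikube/certs, so it illustrates the shape of the step rather than the exact flow.

// Sketch: issue a server certificate with the SANs seen in the log, signed by a local CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads ca.pem / ca-key.pem from the .minikube certs dir.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list from the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-058855"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"ha-058855", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.52")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
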
	I0429 18:59:11.351665   26778 buildroot.go:189] setting minikube options for container-runtime
	I0429 18:59:11.351840   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:59:11.351913   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.354032   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.354302   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.354337   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.354455   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:11.354642   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.354845   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.354990   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:11.355156   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 18:59:11.355310   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 18:59:11.355326   26778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 18:59:11.637312   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 18:59:11.637335   26778 main.go:141] libmachine: Checking connection to Docker...
	I0429 18:59:11.637343   26778 main.go:141] libmachine: (ha-058855) Calling .GetURL
	I0429 18:59:11.638553   26778 main.go:141] libmachine: (ha-058855) DBG | Using libvirt version 6000000
	I0429 18:59:11.640422   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.640675   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.640702   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.640873   26778 main.go:141] libmachine: Docker is up and running!
	I0429 18:59:11.640889   26778 main.go:141] libmachine: Reticulating splines...
	I0429 18:59:11.640895   26778 client.go:171] duration metric: took 25.558524436s to LocalClient.Create
	I0429 18:59:11.640918   26778 start.go:167] duration metric: took 25.558599994s to libmachine.API.Create "ha-058855"
	I0429 18:59:11.640933   26778 start.go:293] postStartSetup for "ha-058855" (driver="kvm2")
	I0429 18:59:11.640945   26778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 18:59:11.640960   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:11.641191   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 18:59:11.641212   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.643096   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.643389   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.643411   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.643515   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:11.643725   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.643870   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:11.644003   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 18:59:11.729083   26778 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 18:59:11.733711   26778 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 18:59:11.733734   26778 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 18:59:11.733784   26778 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 18:59:11.733870   26778 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 18:59:11.733881   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /etc/ssl/certs/151242.pem
	I0429 18:59:11.733969   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 18:59:11.743613   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 18:59:11.770152   26778 start.go:296] duration metric: took 129.204352ms for postStartSetup
	I0429 18:59:11.770203   26778 main.go:141] libmachine: (ha-058855) Calling .GetConfigRaw
	I0429 18:59:11.770756   26778 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 18:59:11.773181   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.773512   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.773541   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.773756   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 18:59:11.773945   26778 start.go:128] duration metric: took 25.709346707s to createHost
	I0429 18:59:11.773976   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.776279   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.776624   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.776654   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.776800   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:11.776996   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.777146   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.777278   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:11.777432   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 18:59:11.777587   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 18:59:11.777601   26778 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 18:59:11.891562   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714417151.879654551
	
	I0429 18:59:11.891593   26778 fix.go:216] guest clock: 1714417151.879654551
	I0429 18:59:11.891602   26778 fix.go:229] Guest: 2024-04-29 18:59:11.879654551 +0000 UTC Remote: 2024-04-29 18:59:11.773965638 +0000 UTC m=+25.839178511 (delta=105.688913ms)
	I0429 18:59:11.891648   26778 fix.go:200] guest clock delta is within tolerance: 105.688913ms
	I0429 18:59:11.891653   26778 start.go:83] releasing machines lock for "ha-058855", held for 25.827128697s
	I0429 18:59:11.891674   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:11.891975   26778 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 18:59:11.894291   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.894604   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.894631   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.894744   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:11.895325   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:11.895490   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:11.895573   26778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 18:59:11.895615   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.895723   26778 ssh_runner.go:195] Run: cat /version.json
	I0429 18:59:11.895749   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.898017   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.898261   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.898293   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.898312   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.898441   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:11.898618   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.898660   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.898694   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.898795   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:11.898850   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:11.898933   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 18:59:11.899005   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.899114   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:11.899217   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 18:59:11.980019   26778 ssh_runner.go:195] Run: systemctl --version
	I0429 18:59:12.005681   26778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 18:59:12.171140   26778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 18:59:12.177944   26778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 18:59:12.178009   26778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 18:59:12.197532   26778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 18:59:12.197559   26778 start.go:494] detecting cgroup driver to use...
	I0429 18:59:12.197626   26778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 18:59:12.215950   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 18:59:12.230970   26778 docker.go:217] disabling cri-docker service (if available) ...
	I0429 18:59:12.231018   26778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 18:59:12.245693   26778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 18:59:12.259626   26778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 18:59:12.384318   26778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 18:59:12.537847   26778 docker.go:233] disabling docker service ...
	I0429 18:59:12.537927   26778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 18:59:12.553895   26778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 18:59:12.568500   26778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 18:59:12.700131   26778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 18:59:12.839476   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 18:59:12.855048   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 18:59:12.875486   26778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 18:59:12.875565   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.886836   26778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 18:59:12.886899   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.898135   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.908886   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.920104   26778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 18:59:12.931187   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.942089   26778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.961928   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.974299   26778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 18:59:12.985323   26778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 18:59:12.985366   26778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 18:59:12.999894   26778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 18:59:13.011289   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:59:13.150511   26778 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 18:59:13.304012   26778 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 18:59:13.304087   26778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 18:59:13.309763   26778 start.go:562] Will wait 60s for crictl version
	I0429 18:59:13.309832   26778 ssh_runner.go:195] Run: which crictl
	I0429 18:59:13.314458   26778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 18:59:13.357508   26778 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 18:59:13.357611   26778 ssh_runner.go:195] Run: crio --version
	I0429 18:59:13.390289   26778 ssh_runner.go:195] Run: crio --version
	I0429 18:59:13.424211   26778 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 18:59:13.425715   26778 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 18:59:13.428241   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:13.428590   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:13.428621   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:13.428841   26778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 18:59:13.433495   26778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 18:59:13.447818   26778 kubeadm.go:877] updating cluster {Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 18:59:13.447940   26778 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 18:59:13.447983   26778 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 18:59:13.483877   26778 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 18:59:13.483944   26778 ssh_runner.go:195] Run: which lz4
	I0429 18:59:13.488489   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 18:59:13.488585   26778 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 18:59:13.493494   26778 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 18:59:13.493532   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 18:59:15.200881   26778 crio.go:462] duration metric: took 1.712326187s to copy over tarball
	I0429 18:59:15.200951   26778 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 18:59:17.696525   26778 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.49555051s)
	I0429 18:59:17.696555   26778 crio.go:469] duration metric: took 2.495646439s to extract the tarball
	I0429 18:59:17.696562   26778 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 18:59:17.736827   26778 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 18:59:17.786117   26778 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 18:59:17.786142   26778 cache_images.go:84] Images are preloaded, skipping loading
	I0429 18:59:17.786151   26778 kubeadm.go:928] updating node { 192.168.39.52 8443 v1.30.0 crio true true} ...
	I0429 18:59:17.786291   26778 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-058855 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 18:59:17.786379   26778 ssh_runner.go:195] Run: crio config
	I0429 18:59:17.844413   26778 cni.go:84] Creating CNI manager for ""
	I0429 18:59:17.844436   26778 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 18:59:17.844448   26778 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 18:59:17.844466   26778 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.52 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-058855 NodeName:ha-058855 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 18:59:17.844603   26778 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-058855"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 18:59:17.844627   26778 kube-vip.go:115] generating kube-vip config ...
	I0429 18:59:17.844665   26778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 18:59:17.865139   26778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 18:59:17.865253   26778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0429 18:59:17.865324   26778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 18:59:17.876875   26778 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 18:59:17.876940   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 18:59:17.887859   26778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 18:59:17.907865   26778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 18:59:17.927443   26778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0429 18:59:17.946838   26778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0429 18:59:17.965580   26778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 18:59:17.970566   26778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 18:59:17.985377   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:59:18.107795   26778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 18:59:18.126577   26778 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855 for IP: 192.168.39.52
	I0429 18:59:18.126602   26778 certs.go:194] generating shared ca certs ...
	I0429 18:59:18.126623   26778 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.126802   26778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 18:59:18.126863   26778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 18:59:18.126877   26778 certs.go:256] generating profile certs ...
	I0429 18:59:18.126972   26778 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key
	I0429 18:59:18.126992   26778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.crt with IP's: []
	I0429 18:59:18.338614   26778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.crt ...
	I0429 18:59:18.338646   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.crt: {Name:mk2faac6a398f89a4d1a9a126033354d7bde59ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.338808   26778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key ...
	I0429 18:59:18.338819   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key: {Name:mk8227aad5a8167db33cc520c292f679014a0ac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.338891   26778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.c5afc2ae
	I0429 18:59:18.338906   26778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.c5afc2ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.52 192.168.39.254]
	I0429 18:59:18.439619   26778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.c5afc2ae ...
	I0429 18:59:18.439652   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.c5afc2ae: {Name:mk221dd4b271f1fdbc86793831f6fbf5460f8563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.439803   26778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.c5afc2ae ...
	I0429 18:59:18.439816   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.c5afc2ae: {Name:mkbb96d6ff3ce7f1d2a0cef765d216fc115a5b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.439889   26778 certs.go:381] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.c5afc2ae -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt
	I0429 18:59:18.439978   26778 certs.go:385] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.c5afc2ae -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key
	I0429 18:59:18.440043   26778 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key
	I0429 18:59:18.440060   26778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt with IP's: []
	I0429 18:59:18.703344   26778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt ...
	I0429 18:59:18.703376   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt: {Name:mkbac1bb5ff240a8f048a4dd619a346b31d7eb7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.703534   26778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key ...
	I0429 18:59:18.703546   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key: {Name:mk0ac8bd499ced3b4ca1180a4958b246d94e3c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.703614   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 18:59:18.703632   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 18:59:18.703642   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 18:59:18.703658   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 18:59:18.703671   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 18:59:18.703689   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 18:59:18.703702   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 18:59:18.703713   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 18:59:18.703768   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 18:59:18.703801   26778 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 18:59:18.703811   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 18:59:18.703840   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 18:59:18.703864   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 18:59:18.703890   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 18:59:18.703924   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 18:59:18.703951   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:59:18.703965   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem -> /usr/share/ca-certificates/15124.pem
	I0429 18:59:18.703976   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /usr/share/ca-certificates/151242.pem
	I0429 18:59:18.704540   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 18:59:18.738263   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 18:59:18.770530   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 18:59:18.800850   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 18:59:18.832435   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 18:59:18.864115   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 18:59:18.893193   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 18:59:18.923614   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 18:59:18.965251   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 18:59:18.996332   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 18:59:19.027163   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 18:59:19.054628   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 18:59:19.074995   26778 ssh_runner.go:195] Run: openssl version
	I0429 18:59:19.081512   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 18:59:19.093922   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:59:19.099812   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:59:19.099870   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:59:19.107286   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 18:59:19.120482   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 18:59:19.133291   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 18:59:19.140245   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 18:59:19.140301   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 18:59:19.147128   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 18:59:19.159843   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 18:59:19.172499   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 18:59:19.177841   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 18:59:19.177894   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 18:59:19.185051   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 18:59:19.197472   26778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 18:59:19.202888   26778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 18:59:19.202948   26778 kubeadm.go:391] StartCluster: {Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:59:19.203039   26778 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 18:59:19.203081   26778 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 18:59:19.244733   26778 cri.go:89] found id: ""
	I0429 18:59:19.244820   26778 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 18:59:19.256733   26778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 18:59:19.268768   26778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 18:59:19.280826   26778 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 18:59:19.280846   26778 kubeadm.go:156] found existing configuration files:
	
	I0429 18:59:19.280900   26778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 18:59:19.292679   26778 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 18:59:19.292743   26778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 18:59:19.304280   26778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 18:59:19.315309   26778 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 18:59:19.315361   26778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 18:59:19.326958   26778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 18:59:19.338190   26778 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 18:59:19.338249   26778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 18:59:19.350650   26778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 18:59:19.361659   26778 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 18:59:19.361746   26778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 18:59:19.372592   26778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 18:59:19.618465   26778 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 18:59:30.028037   26778 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 18:59:30.028108   26778 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 18:59:30.028199   26778 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 18:59:30.028318   26778 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 18:59:30.028407   26778 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 18:59:30.028486   26778 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 18:59:30.030108   26778 out.go:204]   - Generating certificates and keys ...
	I0429 18:59:30.030197   26778 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 18:59:30.030273   26778 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 18:59:30.030370   26778 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 18:59:30.030453   26778 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 18:59:30.030545   26778 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 18:59:30.030607   26778 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 18:59:30.030668   26778 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 18:59:30.030831   26778 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-058855 localhost] and IPs [192.168.39.52 127.0.0.1 ::1]
	I0429 18:59:30.030876   26778 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 18:59:30.030985   26778 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-058855 localhost] and IPs [192.168.39.52 127.0.0.1 ::1]
	I0429 18:59:30.031049   26778 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 18:59:30.031102   26778 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 18:59:30.031141   26778 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 18:59:30.031191   26778 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 18:59:30.031241   26778 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 18:59:30.031288   26778 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 18:59:30.031352   26778 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 18:59:30.031422   26778 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 18:59:30.031480   26778 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 18:59:30.031567   26778 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 18:59:30.031623   26778 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 18:59:30.033419   26778 out.go:204]   - Booting up control plane ...
	I0429 18:59:30.033509   26778 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 18:59:30.033594   26778 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 18:59:30.033675   26778 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 18:59:30.033813   26778 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 18:59:30.033931   26778 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 18:59:30.033984   26778 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 18:59:30.034182   26778 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 18:59:30.034277   26778 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 18:59:30.034376   26778 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.301863ms
	I0429 18:59:30.034492   26778 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 18:59:30.034588   26778 kubeadm.go:309] [api-check] The API server is healthy after 5.911240016s
	I0429 18:59:30.034723   26778 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 18:59:30.034857   26778 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 18:59:30.034933   26778 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 18:59:30.035125   26778 kubeadm.go:309] [mark-control-plane] Marking the node ha-058855 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 18:59:30.035184   26778 kubeadm.go:309] [bootstrap-token] Using token: 87ht6r.s99wm15bpluoriwx
	I0429 18:59:30.036692   26778 out.go:204]   - Configuring RBAC rules ...
	I0429 18:59:30.036773   26778 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 18:59:30.036885   26778 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 18:59:30.037056   26778 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 18:59:30.037226   26778 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 18:59:30.037399   26778 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 18:59:30.037490   26778 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 18:59:30.037651   26778 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 18:59:30.037708   26778 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 18:59:30.037782   26778 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 18:59:30.037794   26778 kubeadm.go:309] 
	I0429 18:59:30.037849   26778 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 18:59:30.037856   26778 kubeadm.go:309] 
	I0429 18:59:30.037942   26778 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 18:59:30.037953   26778 kubeadm.go:309] 
	I0429 18:59:30.038007   26778 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 18:59:30.038087   26778 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 18:59:30.038177   26778 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 18:59:30.038193   26778 kubeadm.go:309] 
	I0429 18:59:30.038265   26778 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 18:59:30.038275   26778 kubeadm.go:309] 
	I0429 18:59:30.038349   26778 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 18:59:30.038360   26778 kubeadm.go:309] 
	I0429 18:59:30.038447   26778 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 18:59:30.038513   26778 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 18:59:30.038610   26778 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 18:59:30.038620   26778 kubeadm.go:309] 
	I0429 18:59:30.038743   26778 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 18:59:30.038817   26778 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 18:59:30.038824   26778 kubeadm.go:309] 
	I0429 18:59:30.038900   26778 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 87ht6r.s99wm15bpluoriwx \
	I0429 18:59:30.038992   26778 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 18:59:30.039012   26778 kubeadm.go:309] 	--control-plane 
	I0429 18:59:30.039018   26778 kubeadm.go:309] 
	I0429 18:59:30.039089   26778 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 18:59:30.039096   26778 kubeadm.go:309] 
	I0429 18:59:30.039161   26778 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 87ht6r.s99wm15bpluoriwx \
	I0429 18:59:30.039266   26778 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
	I0429 18:59:30.039281   26778 cni.go:84] Creating CNI manager for ""
	I0429 18:59:30.039291   26778 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 18:59:30.040841   26778 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 18:59:30.042158   26778 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 18:59:30.048525   26778 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 18:59:30.048539   26778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 18:59:30.070126   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 18:59:30.430317   26778 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 18:59:30.430412   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:30.430416   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-058855 minikube.k8s.io/updated_at=2024_04_29T18_59_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-058855 minikube.k8s.io/primary=true
	I0429 18:59:30.459757   26778 ops.go:34] apiserver oom_adj: -16
	I0429 18:59:30.608077   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:31.108108   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:31.608142   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:32.108659   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:32.608230   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:33.108096   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:33.608472   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:34.108870   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:34.608987   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:35.108341   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:35.608834   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:36.108993   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:36.608073   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:37.108300   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:37.608224   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:38.108782   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:38.608925   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:39.108264   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:39.608942   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:40.108969   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:40.609022   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:41.108330   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:41.609130   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:42.108697   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:42.608132   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:42.734083   26778 kubeadm.go:1107] duration metric: took 12.303712997s to wait for elevateKubeSystemPrivileges
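
The run of "kubectl get sa default" commands above polls, roughly every half second, until the cluster has created the default service account; only then does the elevateKubeSystemPrivileges step report its duration (about 12.3s here). A rough client-go equivalent of that wait, assuming the same on-node kubeconfig path, is sketched below; it is not minikube's own implementation.

    // Sketch: poll until the "default" ServiceAccount exists.
    package main

    import (
    	"context"
    	"log"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			_, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    			if apierrors.IsNotFound(err) {
    				return false, nil // not created yet, keep polling
    			}
    			return err == nil, err
    		})
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Println("default service account is ready")
    }
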
	W0429 18:59:42.734123   26778 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 18:59:42.734130   26778 kubeadm.go:393] duration metric: took 23.531186894s to StartCluster
	I0429 18:59:42.734151   26778 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:42.734237   26778 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:59:42.735028   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:42.735272   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 18:59:42.735283   26778 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 18:59:42.735309   26778 start.go:240] waiting for startup goroutines ...
	I0429 18:59:42.735325   26778 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 18:59:42.735401   26778 addons.go:69] Setting storage-provisioner=true in profile "ha-058855"
	I0429 18:59:42.735414   26778 addons.go:69] Setting default-storageclass=true in profile "ha-058855"
	I0429 18:59:42.735429   26778 addons.go:234] Setting addon storage-provisioner=true in "ha-058855"
	I0429 18:59:42.735455   26778 host.go:66] Checking if "ha-058855" exists ...
	I0429 18:59:42.735457   26778 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-058855"
	I0429 18:59:42.735833   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:59:42.735868   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:59:42.735953   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:59:42.736148   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:59:42.736196   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:59:42.751199   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40963
	I0429 18:59:42.751292   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36175
	I0429 18:59:42.751664   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:59:42.751666   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:59:42.752185   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:59:42.752208   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:59:42.752213   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:59:42.752224   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:59:42.752551   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:59:42.752599   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:59:42.752731   26778 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 18:59:42.753179   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:59:42.753226   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:59:42.755004   26778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:59:42.755344   26778 kapi.go:59] client config for ha-058855: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.crt", KeyFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key", CAFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 18:59:42.755988   26778 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 18:59:42.756170   26778 addons.go:234] Setting addon default-storageclass=true in "ha-058855"
	I0429 18:59:42.756212   26778 host.go:66] Checking if "ha-058855" exists ...
	I0429 18:59:42.756592   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:59:42.756656   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:59:42.769101   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I0429 18:59:42.769648   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:59:42.770223   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:59:42.770247   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:59:42.770593   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:59:42.770769   26778 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 18:59:42.772656   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:42.772805   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I0429 18:59:42.774706   26778 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 18:59:42.773155   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:59:42.776139   26778 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 18:59:42.776157   26778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 18:59:42.776178   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:42.776643   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:59:42.776669   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:59:42.777037   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:59:42.777574   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:59:42.777606   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:59:42.779575   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:42.780083   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:42.780107   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:42.780285   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:42.780540   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:42.780750   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:42.780939   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 18:59:42.792857   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42357
	I0429 18:59:42.793344   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:59:42.793830   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:59:42.793856   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:59:42.794190   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:59:42.794376   26778 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 18:59:42.795993   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:42.796275   26778 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 18:59:42.796289   26778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 18:59:42.796309   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:42.799193   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:42.799566   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:42.799593   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:42.799734   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:42.799905   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:42.800036   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:42.800140   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 18:59:42.875740   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 18:59:42.951882   26778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 18:59:42.961400   26778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 18:59:43.179595   26778 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
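
The sed pipeline at 18:59:42.875 rewrites CoreDNS's Corefile in place: it reads the coredns ConfigMap, splices in a hosts block that maps host.minikube.internal to the host-side gateway (192.168.39.1 here) just ahead of the forward-to-resolv.conf line, and pushes the result back with kubectl replace. A rough client-go equivalent is sketched below; it is illustrative only, and the indentation it matches on may differ from what the sed expression actually sees.

    // Sketch: inject a hosts {} record for host.minikube.internal into the coredns ConfigMap.
    package main

    import (
    	"context"
    	"log"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()

    	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	hosts := "hosts {\n       192.168.39.1 host.minikube.internal\n       fallthrough\n    }\n    "
    	// Place the hosts block immediately before the forward plugin (assumes the default Corefile layout).
    	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "forward . /etc/resolv.conf", hosts+"forward . /etc/resolv.conf", 1)
    	if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("host.minikube.internal record injected")
    }
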
	I0429 18:59:43.231394   26778 main.go:141] libmachine: Making call to close driver server
	I0429 18:59:43.231417   26778 main.go:141] libmachine: (ha-058855) Calling .Close
	I0429 18:59:43.231721   26778 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:59:43.231738   26778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:59:43.231746   26778 main.go:141] libmachine: Making call to close driver server
	I0429 18:59:43.231753   26778 main.go:141] libmachine: (ha-058855) Calling .Close
	I0429 18:59:43.232018   26778 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:59:43.232044   26778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:59:43.232050   26778 main.go:141] libmachine: (ha-058855) DBG | Closing plugin on server side
	I0429 18:59:43.232186   26778 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 18:59:43.232197   26778 round_trippers.go:469] Request Headers:
	I0429 18:59:43.232207   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 18:59:43.232215   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 18:59:43.246447   26778 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 18:59:43.247025   26778 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 18:59:43.247040   26778 round_trippers.go:469] Request Headers:
	I0429 18:59:43.247048   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 18:59:43.247055   26778 round_trippers.go:473]     Content-Type: application/json
	I0429 18:59:43.247058   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 18:59:43.249672   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 18:59:43.249819   26778 main.go:141] libmachine: Making call to close driver server
	I0429 18:59:43.249833   26778 main.go:141] libmachine: (ha-058855) Calling .Close
	I0429 18:59:43.250168   26778 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:59:43.250186   26778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:59:43.250220   26778 main.go:141] libmachine: (ha-058855) DBG | Closing plugin on server side
	I0429 18:59:43.426209   26778 main.go:141] libmachine: Making call to close driver server
	I0429 18:59:43.426230   26778 main.go:141] libmachine: (ha-058855) Calling .Close
	I0429 18:59:43.426534   26778 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:59:43.426551   26778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:59:43.426559   26778 main.go:141] libmachine: Making call to close driver server
	I0429 18:59:43.426568   26778 main.go:141] libmachine: (ha-058855) Calling .Close
	I0429 18:59:43.426792   26778 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:59:43.426805   26778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:59:43.426824   26778 main.go:141] libmachine: (ha-058855) DBG | Closing plugin on server side
	I0429 18:59:43.429848   26778 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0429 18:59:43.431302   26778 addons.go:505] duration metric: took 695.978638ms for enable addons: enabled=[default-storageclass storage-provisioner]
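
Both addons were enabled by copying their manifests onto the node (the "scp memory -->" lines) and applying them with the node's bundled kubectl against /var/lib/minikube/kubeconfig, as the two apply commands at 18:59:42.951 and 18:59:42.961 show. Run locally, the equivalent is roughly the sketch below; the manifest paths are the on-node paths from this log.

    // Sketch: apply the addon manifests the same way the commands above do.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	for _, manifest := range []string{
    		"/etc/kubernetes/addons/storageclass.yaml",
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    	} {
    		cmd := exec.Command("kubectl", "apply", "-f", manifest)
    		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    		if out, err := cmd.CombinedOutput(); err != nil {
    			log.Fatalf("apply %s failed: %v\n%s", manifest, err, out)
    		}
    	}
    	log.Println("addon manifests applied")
    }
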
	I0429 18:59:43.431352   26778 start.go:245] waiting for cluster config update ...
	I0429 18:59:43.431367   26778 start.go:254] writing updated cluster config ...
	I0429 18:59:43.433377   26778 out.go:177] 
	I0429 18:59:43.434775   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:59:43.434879   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 18:59:43.436380   26778 out.go:177] * Starting "ha-058855-m02" control-plane node in "ha-058855" cluster
	I0429 18:59:43.437851   26778 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 18:59:43.437882   26778 cache.go:56] Caching tarball of preloaded images
	I0429 18:59:43.438004   26778 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 18:59:43.438021   26778 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 18:59:43.438126   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 18:59:43.438342   26778 start.go:360] acquireMachinesLock for ha-058855-m02: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 18:59:43.438404   26778 start.go:364] duration metric: took 34.364µs to acquireMachinesLock for "ha-058855-m02"
	I0429 18:59:43.438429   26778 start.go:93] Provisioning new machine with config: &{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 18:59:43.438544   26778 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0429 18:59:43.440136   26778 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 18:59:43.440239   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:59:43.440278   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:59:43.454725   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41979
	I0429 18:59:43.455136   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:59:43.455597   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:59:43.455618   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:59:43.455999   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:59:43.456230   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetMachineName
	I0429 18:59:43.456447   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 18:59:43.456633   26778 start.go:159] libmachine.API.Create for "ha-058855" (driver="kvm2")
	I0429 18:59:43.456651   26778 client.go:168] LocalClient.Create starting
	I0429 18:59:43.456749   26778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 18:59:43.456810   26778 main.go:141] libmachine: Decoding PEM data...
	I0429 18:59:43.456831   26778 main.go:141] libmachine: Parsing certificate...
	I0429 18:59:43.456899   26778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 18:59:43.456925   26778 main.go:141] libmachine: Decoding PEM data...
	I0429 18:59:43.456940   26778 main.go:141] libmachine: Parsing certificate...
	I0429 18:59:43.456980   26778 main.go:141] libmachine: Running pre-create checks...
	I0429 18:59:43.456990   26778 main.go:141] libmachine: (ha-058855-m02) Calling .PreCreateCheck
	I0429 18:59:43.457176   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetConfigRaw
	I0429 18:59:43.457573   26778 main.go:141] libmachine: Creating machine...
	I0429 18:59:43.457586   26778 main.go:141] libmachine: (ha-058855-m02) Calling .Create
	I0429 18:59:43.457724   26778 main.go:141] libmachine: (ha-058855-m02) Creating KVM machine...
	I0429 18:59:43.459160   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found existing default KVM network
	I0429 18:59:43.459330   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found existing private KVM network mk-ha-058855
	I0429 18:59:43.459488   26778 main.go:141] libmachine: (ha-058855-m02) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02 ...
	I0429 18:59:43.459509   26778 main.go:141] libmachine: (ha-058855-m02) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 18:59:43.459561   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:43.459456   27419 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:59:43.459654   26778 main.go:141] libmachine: (ha-058855-m02) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 18:59:43.678395   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:43.678266   27419 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa...
	I0429 18:59:43.975573   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:43.975423   27419 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/ha-058855-m02.rawdisk...
	I0429 18:59:43.975604   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Writing magic tar header
	I0429 18:59:43.975619   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Writing SSH key tar header
	I0429 18:59:43.975636   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:43.975546   27419 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02 ...
	I0429 18:59:43.975654   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02
	I0429 18:59:43.975688   26778 main.go:141] libmachine: (ha-058855-m02) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02 (perms=drwx------)
	I0429 18:59:43.975709   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 18:59:43.975725   26778 main.go:141] libmachine: (ha-058855-m02) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 18:59:43.975740   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:59:43.975773   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 18:59:43.975788   26778 main.go:141] libmachine: (ha-058855-m02) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 18:59:43.975808   26778 main.go:141] libmachine: (ha-058855-m02) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 18:59:43.975821   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 18:59:43.975836   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home/jenkins
	I0429 18:59:43.975847   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home
	I0429 18:59:43.975858   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Skipping /home - not owner
	I0429 18:59:43.975874   26778 main.go:141] libmachine: (ha-058855-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 18:59:43.975893   26778 main.go:141] libmachine: (ha-058855-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 18:59:43.975907   26778 main.go:141] libmachine: (ha-058855-m02) Creating domain...
	I0429 18:59:43.976661   26778 main.go:141] libmachine: (ha-058855-m02) define libvirt domain using xml: 
	I0429 18:59:43.976680   26778 main.go:141] libmachine: (ha-058855-m02) <domain type='kvm'>
	I0429 18:59:43.976687   26778 main.go:141] libmachine: (ha-058855-m02)   <name>ha-058855-m02</name>
	I0429 18:59:43.976692   26778 main.go:141] libmachine: (ha-058855-m02)   <memory unit='MiB'>2200</memory>
	I0429 18:59:43.976698   26778 main.go:141] libmachine: (ha-058855-m02)   <vcpu>2</vcpu>
	I0429 18:59:43.976705   26778 main.go:141] libmachine: (ha-058855-m02)   <features>
	I0429 18:59:43.976711   26778 main.go:141] libmachine: (ha-058855-m02)     <acpi/>
	I0429 18:59:43.976715   26778 main.go:141] libmachine: (ha-058855-m02)     <apic/>
	I0429 18:59:43.976723   26778 main.go:141] libmachine: (ha-058855-m02)     <pae/>
	I0429 18:59:43.976744   26778 main.go:141] libmachine: (ha-058855-m02)     
	I0429 18:59:43.976756   26778 main.go:141] libmachine: (ha-058855-m02)   </features>
	I0429 18:59:43.976762   26778 main.go:141] libmachine: (ha-058855-m02)   <cpu mode='host-passthrough'>
	I0429 18:59:43.976769   26778 main.go:141] libmachine: (ha-058855-m02)   
	I0429 18:59:43.976779   26778 main.go:141] libmachine: (ha-058855-m02)   </cpu>
	I0429 18:59:43.976787   26778 main.go:141] libmachine: (ha-058855-m02)   <os>
	I0429 18:59:43.976791   26778 main.go:141] libmachine: (ha-058855-m02)     <type>hvm</type>
	I0429 18:59:43.976796   26778 main.go:141] libmachine: (ha-058855-m02)     <boot dev='cdrom'/>
	I0429 18:59:43.976801   26778 main.go:141] libmachine: (ha-058855-m02)     <boot dev='hd'/>
	I0429 18:59:43.976808   26778 main.go:141] libmachine: (ha-058855-m02)     <bootmenu enable='no'/>
	I0429 18:59:43.976819   26778 main.go:141] libmachine: (ha-058855-m02)   </os>
	I0429 18:59:43.976843   26778 main.go:141] libmachine: (ha-058855-m02)   <devices>
	I0429 18:59:43.976859   26778 main.go:141] libmachine: (ha-058855-m02)     <disk type='file' device='cdrom'>
	I0429 18:59:43.976870   26778 main.go:141] libmachine: (ha-058855-m02)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/boot2docker.iso'/>
	I0429 18:59:43.976887   26778 main.go:141] libmachine: (ha-058855-m02)       <target dev='hdc' bus='scsi'/>
	I0429 18:59:43.976896   26778 main.go:141] libmachine: (ha-058855-m02)       <readonly/>
	I0429 18:59:43.976904   26778 main.go:141] libmachine: (ha-058855-m02)     </disk>
	I0429 18:59:43.976931   26778 main.go:141] libmachine: (ha-058855-m02)     <disk type='file' device='disk'>
	I0429 18:59:43.976965   26778 main.go:141] libmachine: (ha-058855-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 18:59:43.976982   26778 main.go:141] libmachine: (ha-058855-m02)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/ha-058855-m02.rawdisk'/>
	I0429 18:59:43.976992   26778 main.go:141] libmachine: (ha-058855-m02)       <target dev='hda' bus='virtio'/>
	I0429 18:59:43.977002   26778 main.go:141] libmachine: (ha-058855-m02)     </disk>
	I0429 18:59:43.977010   26778 main.go:141] libmachine: (ha-058855-m02)     <interface type='network'>
	I0429 18:59:43.977022   26778 main.go:141] libmachine: (ha-058855-m02)       <source network='mk-ha-058855'/>
	I0429 18:59:43.977031   26778 main.go:141] libmachine: (ha-058855-m02)       <model type='virtio'/>
	I0429 18:59:43.977036   26778 main.go:141] libmachine: (ha-058855-m02)     </interface>
	I0429 18:59:43.977047   26778 main.go:141] libmachine: (ha-058855-m02)     <interface type='network'>
	I0429 18:59:43.977061   26778 main.go:141] libmachine: (ha-058855-m02)       <source network='default'/>
	I0429 18:59:43.977077   26778 main.go:141] libmachine: (ha-058855-m02)       <model type='virtio'/>
	I0429 18:59:43.977090   26778 main.go:141] libmachine: (ha-058855-m02)     </interface>
	I0429 18:59:43.977101   26778 main.go:141] libmachine: (ha-058855-m02)     <serial type='pty'>
	I0429 18:59:43.977110   26778 main.go:141] libmachine: (ha-058855-m02)       <target port='0'/>
	I0429 18:59:43.977120   26778 main.go:141] libmachine: (ha-058855-m02)     </serial>
	I0429 18:59:43.977129   26778 main.go:141] libmachine: (ha-058855-m02)     <console type='pty'>
	I0429 18:59:43.977144   26778 main.go:141] libmachine: (ha-058855-m02)       <target type='serial' port='0'/>
	I0429 18:59:43.977159   26778 main.go:141] libmachine: (ha-058855-m02)     </console>
	I0429 18:59:43.977171   26778 main.go:141] libmachine: (ha-058855-m02)     <rng model='virtio'>
	I0429 18:59:43.977190   26778 main.go:141] libmachine: (ha-058855-m02)       <backend model='random'>/dev/random</backend>
	I0429 18:59:43.977200   26778 main.go:141] libmachine: (ha-058855-m02)     </rng>
	I0429 18:59:43.977208   26778 main.go:141] libmachine: (ha-058855-m02)     
	I0429 18:59:43.977216   26778 main.go:141] libmachine: (ha-058855-m02)     
	I0429 18:59:43.977222   26778 main.go:141] libmachine: (ha-058855-m02)   </devices>
	I0429 18:59:43.977232   26778 main.go:141] libmachine: (ha-058855-m02) </domain>
	I0429 18:59:43.977239   26778 main.go:141] libmachine: (ha-058855-m02) 
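
The XML dumped above is the complete libvirt domain definition for the new machine: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO on a SCSI cdrom, the raw disk image on a virtio disk, and two virtio NICs (one on the private mk-ha-058855 network, one on libvirt's default network). Defining and booting a domain from XML like this with the libvirt Go bindings looks roughly like the sketch below; it is illustrative only (minikube's kvm2 driver wraps this differently), and the XML file path is hypothetical.

    // Sketch: define and start a libvirt domain from an XML description.
    package main

    import (
    	"log"
    	"os"

    	libvirt "libvirt.org/go/libvirt"
    )

    func main() {
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	xml, err := os.ReadFile("ha-058855-m02.xml") // hypothetical file holding XML like the dump above
    	if err != nil {
    		log.Fatal(err)
    	}
    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // boots the freshly defined domain
    		log.Fatal(err)
    	}
    	log.Println("domain defined and running")
    }
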
	I0429 18:59:43.983852   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:40:82:e8 in network default
	I0429 18:59:43.984371   26778 main.go:141] libmachine: (ha-058855-m02) Ensuring networks are active...
	I0429 18:59:43.984389   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:43.985119   26778 main.go:141] libmachine: (ha-058855-m02) Ensuring network default is active
	I0429 18:59:43.985436   26778 main.go:141] libmachine: (ha-058855-m02) Ensuring network mk-ha-058855 is active
	I0429 18:59:43.985884   26778 main.go:141] libmachine: (ha-058855-m02) Getting domain xml...
	I0429 18:59:43.986602   26778 main.go:141] libmachine: (ha-058855-m02) Creating domain...
	I0429 18:59:45.231264   26778 main.go:141] libmachine: (ha-058855-m02) Waiting to get IP...
	I0429 18:59:45.232027   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:45.232439   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:45.232479   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:45.232435   27419 retry.go:31] will retry after 288.019954ms: waiting for machine to come up
	I0429 18:59:45.522141   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:45.522695   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:45.522720   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:45.522669   27419 retry.go:31] will retry after 341.352877ms: waiting for machine to come up
	I0429 18:59:45.865224   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:45.865742   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:45.865772   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:45.865704   27419 retry.go:31] will retry after 428.945282ms: waiting for machine to come up
	I0429 18:59:46.296241   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:46.296599   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:46.296619   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:46.296581   27419 retry.go:31] will retry after 543.34325ms: waiting for machine to come up
	I0429 18:59:46.841376   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:46.841802   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:46.841829   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:46.841759   27419 retry.go:31] will retry after 762.276747ms: waiting for machine to come up
	I0429 18:59:47.605680   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:47.606106   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:47.606134   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:47.606050   27419 retry.go:31] will retry after 718.412828ms: waiting for machine to come up
	I0429 18:59:48.325846   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:48.326280   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:48.326310   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:48.326230   27419 retry.go:31] will retry after 882.907083ms: waiting for machine to come up
	I0429 18:59:49.210629   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:49.211042   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:49.211065   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:49.211010   27419 retry.go:31] will retry after 1.274425388s: waiting for machine to come up
	I0429 18:59:50.487472   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:50.487829   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:50.487859   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:50.487785   27419 retry.go:31] will retry after 1.613104504s: waiting for machine to come up
	I0429 18:59:52.103213   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:52.103586   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:52.103617   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:52.103571   27419 retry.go:31] will retry after 2.032138772s: waiting for machine to come up
	I0429 18:59:54.137486   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:54.137918   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:54.137946   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:54.137874   27419 retry.go:31] will retry after 2.860217313s: waiting for machine to come up
	I0429 18:59:57.000098   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:57.000554   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:57.000591   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:57.000478   27419 retry.go:31] will retry after 3.364383116s: waiting for machine to come up
	I0429 19:00:00.366964   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:00.367359   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 19:00:00.367385   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 19:00:00.367324   27419 retry.go:31] will retry after 3.364915441s: waiting for machine to come up
	I0429 19:00:03.733964   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:03.734448   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 19:00:03.734474   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 19:00:03.734425   27419 retry.go:31] will retry after 4.96010853s: waiting for machine to come up
	I0429 19:00:08.695586   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:08.696062   26778 main.go:141] libmachine: (ha-058855-m02) Found IP for machine: 192.168.39.27
	I0429 19:00:08.696093   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has current primary IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
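
The "unable to find current IP address ... will retry after ..." sequence above is a plain retry loop with a growing delay: the driver keeps re-reading the DHCP leases for the domain's MAC address, waiting a little longer between attempts, until a lease shows up (about 25 seconds after domain creation in this run). Stripped to its essentials, the pattern is the sketch below; the lookup function is a stand-in, not the driver's actual lease query.

    // Sketch of a retry loop with growing backoff, as seen in the log lines above.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retry keeps calling fn until it succeeds, roughly doubling the wait each time.
    func retry(attempts int, initial time.Duration, fn func() error) error {
    	wait := initial
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		fmt.Printf("will retry after %v\n", wait)
    		time.Sleep(wait)
    		wait *= 2
    	}
    	return errors.New("gave up waiting for machine to come up")
    }

    func main() {
    	tries := 0
    	lookupIP := func() error { // stand-in for reading the DHCP lease table for the MAC
    		tries++
    		if tries < 4 {
    			return errors.New("no lease yet")
    		}
    		return nil
    	}
    	if err := retry(10, 300*time.Millisecond, lookupIP); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("machine has an IP")
    }
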
	I0429 19:00:08.696101   26778 main.go:141] libmachine: (ha-058855-m02) Reserving static IP address...
	I0429 19:00:08.696665   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find host DHCP lease matching {name: "ha-058855-m02", mac: "52:54:00:98:81:20", ip: "192.168.39.27"} in network mk-ha-058855
	I0429 19:00:08.770639   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Getting to WaitForSSH function...
	I0429 19:00:08.770668   26778 main.go:141] libmachine: (ha-058855-m02) Reserved static IP address: 192.168.39.27
	I0429 19:00:08.770680   26778 main.go:141] libmachine: (ha-058855-m02) Waiting for SSH to be available...
	I0429 19:00:08.773095   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:08.773382   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855
	I0429 19:00:08.773414   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find defined IP address of network mk-ha-058855 interface with MAC address 52:54:00:98:81:20
	I0429 19:00:08.773586   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Using SSH client type: external
	I0429 19:00:08.773613   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa (-rw-------)
	I0429 19:00:08.773656   26778 main.go:141] libmachine: (ha-058855-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:00:08.773672   26778 main.go:141] libmachine: (ha-058855-m02) DBG | About to run SSH command:
	I0429 19:00:08.773715   26778 main.go:141] libmachine: (ha-058855-m02) DBG | exit 0
	I0429 19:00:08.777252   26778 main.go:141] libmachine: (ha-058855-m02) DBG | SSH cmd err, output: exit status 255: 
	I0429 19:00:08.777286   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0429 19:00:08.777298   26778 main.go:141] libmachine: (ha-058855-m02) DBG | command : exit 0
	I0429 19:00:08.777306   26778 main.go:141] libmachine: (ha-058855-m02) DBG | err     : exit status 255
	I0429 19:00:08.777317   26778 main.go:141] libmachine: (ha-058855-m02) DBG | output  : 
	I0429 19:00:11.779414   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Getting to WaitForSSH function...
	I0429 19:00:11.781786   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:11.782111   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:11.782155   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:11.782277   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Using SSH client type: external
	I0429 19:00:11.782302   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa (-rw-------)
	I0429 19:00:11.782340   26778 main.go:141] libmachine: (ha-058855-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:00:11.782361   26778 main.go:141] libmachine: (ha-058855-m02) DBG | About to run SSH command:
	I0429 19:00:11.782390   26778 main.go:141] libmachine: (ha-058855-m02) DBG | exit 0
	I0429 19:00:11.915120   26778 main.go:141] libmachine: (ha-058855-m02) DBG | SSH cmd err, output: <nil>: 
	I0429 19:00:11.915290   26778 main.go:141] libmachine: (ha-058855-m02) KVM machine creation complete!
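
Machine readiness is judged by nothing more than getting "exit 0" to run over SSH: the attempt at 19:00:08 failed with exit status 255 because no IP address for the guest's interface could be found yet, and the retry at 19:00:11 succeeded once the DHCP lease existed. A stand-alone version of that probe, using a subset of the SSH options from the log (the key path and address are placeholders from this run, not values to hard-code), is simply:

    // Sketch: probe SSH availability by running "exit 0" on the guest.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("ssh",
    		"-F", "/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", "/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa",
    		"-p", "22",
    		"docker@192.168.39.27",
    		"exit 0",
    	)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("SSH not ready yet: %v\n%s", err, out)
    	}
    	log.Println("SSH is available")
    }
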
	I0429 19:00:11.915625   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetConfigRaw
	I0429 19:00:11.916168   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:11.916348   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:11.916548   26778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 19:00:11.916565   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetState
	I0429 19:00:11.917861   26778 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 19:00:11.917899   26778 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 19:00:11.917909   26778 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 19:00:11.917921   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:11.919969   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:11.920293   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:11.920317   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:11.920482   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:11.920697   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:11.920833   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:11.920954   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:11.921131   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:00:11.921367   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0429 19:00:11.921385   26778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 19:00:12.037872   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:00:12.037900   26778 main.go:141] libmachine: Detecting the provisioner...
	I0429 19:00:12.037908   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:12.040538   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.040908   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.040953   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.041100   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:12.041312   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.041461   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.041633   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:12.041790   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:00:12.041952   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0429 19:00:12.041965   26778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 19:00:12.159783   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 19:00:12.159873   26778 main.go:141] libmachine: found compatible host: buildroot
	I0429 19:00:12.159890   26778 main.go:141] libmachine: Provisioning with buildroot...
	I0429 19:00:12.159901   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetMachineName
	I0429 19:00:12.160170   26778 buildroot.go:166] provisioning hostname "ha-058855-m02"
	I0429 19:00:12.160198   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetMachineName
	I0429 19:00:12.160380   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:12.162841   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.163184   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.163232   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.163330   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:12.163495   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.163649   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.163763   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:12.163916   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:00:12.164093   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0429 19:00:12.164106   26778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-058855-m02 && echo "ha-058855-m02" | sudo tee /etc/hostname
	I0429 19:00:12.294307   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-058855-m02
	
	I0429 19:00:12.294346   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:12.297012   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.297368   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.297402   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.297565   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:12.297754   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.297888   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.298041   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:12.298207   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:00:12.298414   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0429 19:00:12.298433   26778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-058855-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-058855-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-058855-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:00:12.427831   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:00:12.427863   26778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:00:12.427883   26778 buildroot.go:174] setting up certificates
	I0429 19:00:12.427898   26778 provision.go:84] configureAuth start
	I0429 19:00:12.427914   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetMachineName
	I0429 19:00:12.428188   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:00:12.430891   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.431294   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.431325   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.431457   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:12.433562   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.433989   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.434014   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.434171   26778 provision.go:143] copyHostCerts
	I0429 19:00:12.434198   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:00:12.434230   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:00:12.434245   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:00:12.434321   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:00:12.434406   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:00:12.434425   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:00:12.434434   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:00:12.434458   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:00:12.434545   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:00:12.434575   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:00:12.434583   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:00:12.434609   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:00:12.434666   26778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.ha-058855-m02 san=[127.0.0.1 192.168.39.27 ha-058855-m02 localhost minikube]
	I0429 19:00:12.570018   26778 provision.go:177] copyRemoteCerts
	I0429 19:00:12.570117   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:00:12.570141   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:12.572743   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.573042   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.573072   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.573219   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:12.573405   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.573576   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:12.573695   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	I0429 19:00:12.661515   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 19:00:12.661585   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:00:12.689766   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 19:00:12.689834   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 19:00:12.720381   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 19:00:12.720444   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 19:00:12.749899   26778 provision.go:87] duration metric: took 321.986297ms to configureAuth
	I0429 19:00:12.749929   26778 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:00:12.750132   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:00:12.750202   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:12.752958   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.753340   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.753365   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.753526   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:12.753732   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.753905   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.754047   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:12.754233   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:00:12.754391   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0429 19:00:12.754405   26778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:00:13.064704   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 19:00:13.064728   26778 main.go:141] libmachine: Checking connection to Docker...
	I0429 19:00:13.064735   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetURL
	I0429 19:00:13.066069   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Using libvirt version 6000000
	I0429 19:00:13.068255   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.068591   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.068622   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.068805   26778 main.go:141] libmachine: Docker is up and running!
	I0429 19:00:13.068821   26778 main.go:141] libmachine: Reticulating splines...
	I0429 19:00:13.068827   26778 client.go:171] duration metric: took 29.612166123s to LocalClient.Create
	I0429 19:00:13.068848   26778 start.go:167] duration metric: took 29.612214179s to libmachine.API.Create "ha-058855"
	I0429 19:00:13.068862   26778 start.go:293] postStartSetup for "ha-058855-m02" (driver="kvm2")
	I0429 19:00:13.068872   26778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:00:13.068898   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:13.069242   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:00:13.069284   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:13.072032   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.072463   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.072489   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.072654   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:13.072802   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:13.072958   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:13.073162   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	I0429 19:00:13.161599   26778 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:00:13.166655   26778 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:00:13.166685   26778 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:00:13.166772   26778 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:00:13.166846   26778 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:00:13.166856   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /etc/ssl/certs/151242.pem
	I0429 19:00:13.166959   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:00:13.177727   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:00:13.206240   26778 start.go:296] duration metric: took 137.364447ms for postStartSetup
	I0429 19:00:13.206288   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetConfigRaw
	I0429 19:00:13.206821   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:00:13.209346   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.209675   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.209706   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.209938   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 19:00:13.210151   26778 start.go:128] duration metric: took 29.77159513s to createHost
	I0429 19:00:13.210175   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:13.212467   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.212802   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.212825   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.212971   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:13.213134   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:13.213283   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:13.213439   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:13.213593   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:00:13.213741   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0429 19:00:13.213751   26778 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 19:00:13.332585   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714417213.321381911
	
	I0429 19:00:13.332610   26778 fix.go:216] guest clock: 1714417213.321381911
	I0429 19:00:13.332620   26778 fix.go:229] Guest: 2024-04-29 19:00:13.321381911 +0000 UTC Remote: 2024-04-29 19:00:13.210163606 +0000 UTC m=+87.275376480 (delta=111.218305ms)
	I0429 19:00:13.332635   26778 fix.go:200] guest clock delta is within tolerance: 111.218305ms
	I0429 19:00:13.332640   26778 start.go:83] releasing machines lock for "ha-058855-m02", held for 29.89422449s
	I0429 19:00:13.332656   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:13.332892   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:00:13.335629   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.335965   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.335990   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.338552   26778 out.go:177] * Found network options:
	I0429 19:00:13.339978   26778 out.go:177]   - NO_PROXY=192.168.39.52
	W0429 19:00:13.341305   26778 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:00:13.341353   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:13.342010   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:13.342226   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:13.342337   26778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:00:13.342375   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	W0429 19:00:13.342462   26778 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:00:13.342552   26778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:00:13.342576   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:13.345041   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.345255   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.345456   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.345486   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.345584   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:13.345730   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.345740   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:13.345751   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.345917   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:13.345928   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:13.346177   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	I0429 19:00:13.346238   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:13.346375   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:13.346536   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	I0429 19:00:13.594081   26778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:00:13.601628   26778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:00:13.601706   26778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:00:13.622685   26778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:00:13.622718   26778 start.go:494] detecting cgroup driver to use...
	I0429 19:00:13.622789   26778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:00:13.641928   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:00:13.657761   26778 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:00:13.657821   26778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:00:13.673744   26778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:00:13.689083   26778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:00:13.828789   26778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:00:13.991339   26778 docker.go:233] disabling docker service ...
	I0429 19:00:13.991432   26778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:00:14.008421   26778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:00:14.022861   26778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:00:14.166301   26778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:00:14.283457   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 19:00:14.299275   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:00:14.324665   26778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 19:00:14.324726   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.336852   26778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:00:14.336908   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.348539   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.361198   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.373518   26778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:00:14.385123   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.396271   26778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.415744   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.426968   26778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:00:14.436888   26778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 19:00:14.436949   26778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 19:00:14.453075   26778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:00:14.466893   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:00:14.597651   26778 ssh_runner.go:195] Run: sudo systemctl restart crio
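	Taken together, the sed edits above amount to a small CRI-O drop-in that is then activated by the crio restart; a rough sketch of what /etc/crio/crio.conf.d/02-crio.conf contains after this step (reconstructed from the sed expressions in the log, assuming a stock Buildroot image and omitting unrelated keys):
	
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	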
	I0429 19:00:14.757924   26778 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:00:14.757990   26778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:00:14.763349   26778 start.go:562] Will wait 60s for crictl version
	I0429 19:00:14.763396   26778 ssh_runner.go:195] Run: which crictl
	I0429 19:00:14.767450   26778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:00:14.818781   26778 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 19:00:14.818869   26778 ssh_runner.go:195] Run: crio --version
	I0429 19:00:14.850335   26778 ssh_runner.go:195] Run: crio --version
	I0429 19:00:14.886670   26778 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 19:00:14.888365   26778 out.go:177]   - env NO_PROXY=192.168.39.52
	I0429 19:00:14.889746   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:00:14.892341   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:14.892741   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:14.892771   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:14.892958   26778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 19:00:14.897839   26778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:00:14.912933   26778 mustload.go:65] Loading cluster: ha-058855
	I0429 19:00:14.913133   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:00:14.913423   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:00:14.913460   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:00:14.928295   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39599
	I0429 19:00:14.928783   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:00:14.929310   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:00:14.929336   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:00:14.929633   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:00:14.929869   26778 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:00:14.931253   26778 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:00:14.931550   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:00:14.931582   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:00:14.945834   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45259
	I0429 19:00:14.946287   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:00:14.946699   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:00:14.946738   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:00:14.947037   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:00:14.947206   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:00:14.947377   26778 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855 for IP: 192.168.39.27
	I0429 19:00:14.947395   26778 certs.go:194] generating shared ca certs ...
	I0429 19:00:14.947411   26778 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:00:14.947572   26778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 19:00:14.947621   26778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 19:00:14.947636   26778 certs.go:256] generating profile certs ...
	I0429 19:00:14.947749   26778 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key
	I0429 19:00:14.947783   26778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.92ecc576
	I0429 19:00:14.947803   26778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.92ecc576 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.52 192.168.39.27 192.168.39.254]
	I0429 19:00:15.294884   26778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.92ecc576 ...
	I0429 19:00:15.294913   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.92ecc576: {Name:mkb034de2f41ca35c303234e6f802403c57586ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:00:15.295107   26778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.92ecc576 ...
	I0429 19:00:15.295125   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.92ecc576: {Name:mkbe37529d1b277fc4a208f5b0f89e39776fabc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:00:15.295230   26778 certs.go:381] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.92ecc576 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt
	I0429 19:00:15.295401   26778 certs.go:385] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.92ecc576 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key
	I0429 19:00:15.295596   26778 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key
	I0429 19:00:15.295619   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:00:15.295639   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:00:15.295657   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:00:15.295677   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:00:15.295697   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:00:15.295710   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:00:15.295724   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:00:15.295736   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:00:15.295785   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 19:00:15.295814   26778 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 19:00:15.295824   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 19:00:15.295844   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 19:00:15.295866   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 19:00:15.295921   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 19:00:15.295967   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:00:15.296012   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /usr/share/ca-certificates/151242.pem
	I0429 19:00:15.296026   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:00:15.296039   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem -> /usr/share/ca-certificates/15124.pem
	I0429 19:00:15.296067   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:00:15.298956   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:00:15.299306   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:00:15.299341   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:00:15.299512   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:00:15.299685   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:00:15.299862   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:00:15.300028   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:00:15.378460   26778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0429 19:00:15.385333   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 19:00:15.400757   26778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0429 19:00:15.405965   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0429 19:00:15.420574   26778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 19:00:15.426174   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 19:00:15.439222   26778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0429 19:00:15.448132   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0429 19:00:15.464834   26778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0429 19:00:15.469819   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 19:00:15.482125   26778 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0429 19:00:15.487531   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0429 19:00:15.499235   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:00:15.529046   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 19:00:15.555896   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:00:15.590035   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:00:15.615905   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0429 19:00:15.643601   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 19:00:15.671197   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:00:15.697565   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 19:00:15.723401   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 19:00:15.748529   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:00:15.776188   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 19:00:15.802620   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 19:00:15.820770   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0429 19:00:15.839189   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 19:00:15.858797   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0429 19:00:15.879490   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 19:00:15.901632   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0429 19:00:15.922300   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 19:00:15.942601   26778 ssh_runner.go:195] Run: openssl version
	I0429 19:00:15.949452   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 19:00:15.963818   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 19:00:15.969239   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:00:15.969303   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 19:00:15.975940   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:00:15.989823   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:00:16.003500   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:00:16.008876   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:00:16.008935   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:00:16.015404   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:00:16.028915   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 19:00:16.042500   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 19:00:16.047660   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:00:16.047719   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 19:00:16.053871   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
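	The numeric .0 names in these symlinks are the certificates' OpenSSL subject hashes, which is why each ln -fs is preceded by an openssl x509 -hash run; for example, for the minikube CA installed above (hash value taken from the commands in this run):
	
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # hash-named link that OpenSSL looks up
	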
	I0429 19:00:16.066750   26778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:00:16.071234   26778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 19:00:16.071277   26778 kubeadm.go:928] updating node {m02 192.168.39.27 8443 v1.30.0 crio true true} ...
	I0429 19:00:16.071353   26778 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-058855-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:00:16.071377   26778 kube-vip.go:115] generating kube-vip config ...
	I0429 19:00:16.071407   26778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 19:00:16.089480   26778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 19:00:16.089553   26778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
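	The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml later in this step (see the scp below), so the kubelet runs kube-vip as a static pod that claims the 192.168.39.254 VIP on the new node. A hypothetical way to confirm it on the host, not part of the test itself:
	
	sudo crictl pods --name kube-vip          # static pod created by the kubelet from the manifest
	ip addr show eth0 | grep 192.168.39.254   # the control-plane VIP from the manifest above
	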
	I0429 19:00:16.089613   26778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:00:16.100670   26778 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 19:00:16.100725   26778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 19:00:16.111891   26778 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 19:00:16.111911   26778 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0429 19:00:16.111918   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:00:16.111918   26778 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0429 19:00:16.111989   26778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:00:16.118185   26778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 19:00:16.118221   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 19:00:50.972503   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:00:50.972586   26778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:00:50.978615   26778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 19:00:50.978656   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 19:01:25.243523   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:01:25.262125   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:01:25.262248   26778 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:01:25.267598   26778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 19:01:25.267637   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0429 19:01:25.744249   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 19:01:25.756367   26778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0429 19:01:25.776461   26778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:01:25.795913   26778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0429 19:01:25.815558   26778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 19:01:25.820116   26778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:01:25.835422   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:01:25.986295   26778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:01:26.006270   26778 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:01:26.006725   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:01:26.006777   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:01:26.021972   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I0429 19:01:26.022416   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:01:26.022919   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:01:26.022940   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:01:26.023318   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:01:26.023514   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:01:26.023650   26778 start.go:316] joinCluster: &{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:01:26.023758   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 19:01:26.023781   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:01:26.027191   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:01:26.027634   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:01:26.027664   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:01:26.027807   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:01:26.027976   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:01:26.028166   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:01:26.028361   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:01:26.227534   26778 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:01:26.227586   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token y25o3g.nddjkwofticnjyl8 --discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-058855-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443"
	I0429 19:01:50.314194   26778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token y25o3g.nddjkwofticnjyl8 --discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-058855-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443": (24.086581898s)
	I0429 19:01:50.314231   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 19:01:50.976596   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-058855-m02 minikube.k8s.io/updated_at=2024_04_29T19_01_50_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-058855 minikube.k8s.io/primary=false
	I0429 19:01:51.156612   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-058855-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 19:01:51.320453   26778 start.go:318] duration metric: took 25.29679628s to joinCluster
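
For readers skimming the log: the joinCluster step above boils down to three commands run over SSH, followed by the kubectl label/taint calls already shown. The Go sketch below only restates those commands (taken verbatim from the log lines above); runSSH is a hypothetical stub standing in for minikube's ssh_runner, not the real implementation.

package main

import "fmt"

// runSSH is a hypothetical stand-in for minikube's ssh_runner; it just echoes the
// command it would execute on the target VM.
func runSSH(cmd string) (string, error) {
	fmt.Println("ssh>", cmd)
	return "", nil
}

func main() {
	// 1. On the existing control plane, print a join command with a non-expiring token.
	joinCmd, _ := runSSH(`sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0`)
	// 2. Run that command on the new node as an additional control plane, advertising
	//    its own IP on the shared API server port (flags as logged above).
	runSSH(joinCmd + ` --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-058855-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443`)
	// 3. Enable and start kubelet on the joined node.
	runSSH(`sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet`)
}
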
	I0429 19:01:51.320565   26778 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:01:51.321995   26778 out.go:177] * Verifying Kubernetes components...
	I0429 19:01:51.320820   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:01:51.323237   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:01:51.613558   26778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:01:51.646971   26778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:01:51.647408   26778 kapi.go:59] client config for ha-058855: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.crt", KeyFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key", CAFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 19:01:51.647498   26778 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.52:8443
	I0429 19:01:51.647802   26778 node_ready.go:35] waiting up to 6m0s for node "ha-058855-m02" to be "Ready" ...
	I0429 19:01:51.647944   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:51.647957   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:51.647969   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:51.647980   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:51.660139   26778 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 19:01:52.148893   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:52.148914   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:52.148921   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:52.148925   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:52.152921   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:52.648346   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:52.648373   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:52.648383   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:52.648388   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:52.682535   26778 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0429 19:01:53.148627   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:53.148653   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:53.148666   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:53.148683   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:53.152363   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:53.648121   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:53.648146   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:53.648158   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:53.648166   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:53.652774   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:53.653410   26778 node_ready.go:53] node "ha-058855-m02" has status "Ready":"False"
	I0429 19:01:54.148355   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:54.148378   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:54.148390   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:54.148397   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:54.153047   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:54.648266   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:54.648287   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:54.648294   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:54.648299   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:54.652522   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:55.148546   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:55.148582   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:55.148590   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:55.148596   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:55.152445   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:55.648760   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:55.648780   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:55.648788   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:55.648792   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:55.652343   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:56.148037   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:56.148070   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:56.148093   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:56.148099   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:56.152206   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:56.152907   26778 node_ready.go:53] node "ha-058855-m02" has status "Ready":"False"
	I0429 19:01:56.648187   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:56.648210   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:56.648219   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:56.648224   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:56.652324   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:57.148479   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:57.148504   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:57.148516   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:57.148524   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:57.155633   26778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:01:57.648853   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:57.648877   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:57.648885   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:57.648890   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:57.653182   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:58.148268   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:58.148292   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:58.148320   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:58.148324   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:58.152128   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:58.648691   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:58.648713   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:58.648722   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:58.648726   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:58.652637   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:58.653456   26778 node_ready.go:53] node "ha-058855-m02" has status "Ready":"False"
	I0429 19:01:59.148550   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:59.148580   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.148592   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.148610   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.153627   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:59.155026   26778 node_ready.go:49] node "ha-058855-m02" has status "Ready":"True"
	I0429 19:01:59.155052   26778 node_ready.go:38] duration metric: took 7.507203783s for node "ha-058855-m02" to be "Ready" ...
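
The run of GETs above is minikube polling /api/v1/nodes/ha-058855-m02 roughly every 500ms until the node's Ready condition turns True (about 7.5s here). A minimal client-go sketch of that kind of poll, purely illustrative and not minikube's actual code:

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady fetches the node every 500ms until its NodeReady condition is True
// or the timeout expires (the log uses a 6m0s budget for this wait).
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
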
	I0429 19:01:59.155064   26778 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:01:59.155159   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:01:59.155173   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.155183   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.155189   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.161694   26778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:01:59.170047   26778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bbq9x" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.170148   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bbq9x
	I0429 19:01:59.170160   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.170167   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.170172   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.173879   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:59.174619   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:01:59.174637   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.174644   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.174648   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.179644   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:59.180979   26778 pod_ready.go:92] pod "coredns-7db6d8ff4d-bbq9x" in "kube-system" namespace has status "Ready":"True"
	I0429 19:01:59.180996   26778 pod_ready.go:81] duration metric: took 10.912717ms for pod "coredns-7db6d8ff4d-bbq9x" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.181005   26778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-njch8" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.181058   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-njch8
	I0429 19:01:59.181068   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.181080   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.181090   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.183986   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:01:59.184686   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:01:59.184701   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.184708   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.184712   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.187897   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:59.188624   26778 pod_ready.go:92] pod "coredns-7db6d8ff4d-njch8" in "kube-system" namespace has status "Ready":"True"
	I0429 19:01:59.188645   26778 pod_ready.go:81] duration metric: took 7.633481ms for pod "coredns-7db6d8ff4d-njch8" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.188658   26778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.188725   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855
	I0429 19:01:59.188737   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.188746   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.188756   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.191528   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:01:59.192263   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:01:59.192280   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.192287   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.192290   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.194727   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:01:59.195387   26778 pod_ready.go:92] pod "etcd-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:01:59.195407   26778 pod_ready.go:81] duration metric: took 6.741642ms for pod "etcd-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.195415   26778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.195460   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:01:59.195467   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.195474   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.195480   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.198388   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:01:59.199048   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:59.199063   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.199070   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.199074   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.201636   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:01:59.695652   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:01:59.695677   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.695685   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.695689   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.699159   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:59.699883   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:59.699901   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.699912   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.699919   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.702751   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:00.195579   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:00.195604   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:00.195614   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:00.195620   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:00.201107   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:02:00.202014   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:00.202029   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:00.202036   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:00.202040   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:00.205410   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:00.696380   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:00.696405   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:00.696413   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:00.696416   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:00.700536   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:00.701517   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:00.701536   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:00.701544   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:00.701547   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:00.704698   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:01.195911   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:01.195940   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:01.195951   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:01.195956   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:01.200235   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:01.201183   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:01.201199   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:01.201204   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:01.201208   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:01.204245   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:01.205086   26778 pod_ready.go:102] pod "etcd-ha-058855-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 19:02:01.696448   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:01.696470   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:01.696477   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:01.696482   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:01.703231   26778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:02:01.704706   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:01.704723   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:01.704739   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:01.704747   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:01.707602   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:02.195798   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:02.195830   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:02.195839   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:02.195844   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:02.200740   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:02.201671   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:02.201687   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:02.201694   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:02.201699   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:02.205319   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:02.696277   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:02.696300   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:02.696308   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:02.696313   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:02.701170   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:02.702156   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:02.702171   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:02.702178   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:02.702183   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:02.705439   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:03.195582   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:03.195612   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:03.195626   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:03.195634   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:03.199601   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:03.200597   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:03.200613   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:03.200620   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:03.200626   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:03.203604   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:03.696082   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:03.696102   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:03.696111   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:03.696118   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:03.700252   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:03.701074   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:03.701089   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:03.701100   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:03.701106   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:03.703913   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:03.704576   26778 pod_ready.go:102] pod "etcd-ha-058855-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 19:02:04.196005   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:04.196033   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:04.196040   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:04.196044   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:04.199615   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:04.200609   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:04.200624   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:04.200632   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:04.200636   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:04.203723   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:04.695662   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:04.695687   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:04.695697   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:04.695702   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:04.699413   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:04.700424   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:04.700439   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:04.700447   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:04.700453   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:04.703708   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.195698   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:05.195723   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.195734   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.195743   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.199593   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.200231   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:05.200250   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.200260   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.200265   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.203686   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.696469   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:05.696498   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.696509   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.696514   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.701346   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:05.702785   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:05.702800   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.702807   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.702810   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.706019   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.706675   26778 pod_ready.go:92] pod "etcd-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:05.706701   26778 pod_ready.go:81] duration metric: took 6.511279394s for pod "etcd-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.706713   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.706763   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-058855
	I0429 19:02:05.706770   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.706777   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.706780   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.710127   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.711122   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:02:05.711141   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.711148   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.711152   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.715021   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.715759   26778 pod_ready.go:92] pod "kube-apiserver-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:05.715781   26778 pod_ready.go:81] duration metric: took 9.06116ms for pod "kube-apiserver-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.715793   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.715851   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-058855-m02
	I0429 19:02:05.715858   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.715869   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.715875   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.718816   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:05.719499   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:05.719514   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.719519   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.719522   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.721882   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:05.722413   26778 pod_ready.go:92] pod "kube-apiserver-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:05.722429   26778 pod_ready.go:81] duration metric: took 6.62945ms for pod "kube-apiserver-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.722438   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.722480   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855
	I0429 19:02:05.722488   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.722494   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.722499   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.725036   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:05.725874   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:02:05.725889   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.725899   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.725907   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.728522   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:05.729112   26778 pod_ready.go:92] pod "kube-controller-manager-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:05.729130   26778 pod_ready.go:81] duration metric: took 6.685135ms for pod "kube-controller-manager-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.729142   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.749493   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855-m02
	I0429 19:02:05.749525   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.749535   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.749541   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.753178   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.948977   26778 request.go:629] Waited for 194.998438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:05.949032   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:05.949037   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.949045   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.949049   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.952834   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.953786   26778 pod_ready.go:92] pod "kube-controller-manager-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:05.953810   26778 pod_ready.go:81] duration metric: took 224.658701ms for pod "kube-controller-manager-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
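
The "Waited ... due to client-side throttling, not priority and fairness" messages in this stretch of the log come from client-go's own token-bucket rate limiter, not from the API server: with the default rest.Config settings (QPS 5, Burst 10), a burst of per-pod and per-node GETs like the ones above gets queued on the client. A hedged sketch of where those knobs live (kubeconfigPath and the raised values are illustrative, not minikube's settings):

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newClient(kubeconfigPath string) *kubernetes.Clientset {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		log.Fatal(err)
	}
	// client-go defaults are QPS=5, Burst=10; raising them reduces the
	// client-side queueing reported in the surrounding log lines.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	return cs
}
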
	I0429 19:02:05.953824   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nz2rv" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:06.149131   26778 request.go:629] Waited for 195.235479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz2rv
	I0429 19:02:06.149204   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz2rv
	I0429 19:02:06.149211   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:06.149222   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:06.149230   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:06.156135   26778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:02:06.349304   26778 request.go:629] Waited for 192.378541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:06.349377   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:06.349382   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:06.349389   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:06.349394   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:06.353882   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:06.354770   26778 pod_ready.go:92] pod "kube-proxy-nz2rv" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:06.354787   26778 pod_ready.go:81] duration metric: took 400.955401ms for pod "kube-proxy-nz2rv" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:06.354796   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xldlc" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:06.548928   26778 request.go:629] Waited for 194.054332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xldlc
	I0429 19:02:06.548990   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xldlc
	I0429 19:02:06.548996   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:06.549004   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:06.549007   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:06.552981   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:06.749232   26778 request.go:629] Waited for 195.3669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:02:06.749312   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:02:06.749323   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:06.749333   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:06.749342   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:06.753364   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:06.754015   26778 pod_ready.go:92] pod "kube-proxy-xldlc" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:06.754034   26778 pod_ready.go:81] duration metric: took 399.232401ms for pod "kube-proxy-xldlc" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:06.754043   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:06.949226   26778 request.go:629] Waited for 195.086098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855
	I0429 19:02:06.949283   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855
	I0429 19:02:06.949288   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:06.949294   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:06.949297   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:06.952920   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:07.148961   26778 request.go:629] Waited for 195.205382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:02:07.149012   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:02:07.149018   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:07.149028   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:07.149035   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:07.153185   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:07.154248   26778 pod_ready.go:92] pod "kube-scheduler-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:07.154271   26778 pod_ready.go:81] duration metric: took 400.222276ms for pod "kube-scheduler-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:07.154281   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:07.349255   26778 request.go:629] Waited for 194.918313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855-m02
	I0429 19:02:07.349345   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855-m02
	I0429 19:02:07.349357   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:07.349368   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:07.349377   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:07.353193   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:07.549268   26778 request.go:629] Waited for 195.21446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:07.549331   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:07.549336   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:07.549343   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:07.549348   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:07.552812   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:07.553419   26778 pod_ready.go:92] pod "kube-scheduler-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:07.553439   26778 pod_ready.go:81] duration metric: took 399.150386ms for pod "kube-scheduler-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:07.553449   26778 pod_ready.go:38] duration metric: took 8.398363668s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:02:07.553468   26778 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:02:07.553523   26778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:02:07.574578   26778 api_server.go:72] duration metric: took 16.253972211s to wait for apiserver process to appear ...
	I0429 19:02:07.574610   26778 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:02:07.574634   26778 api_server.go:253] Checking apiserver healthz at https://192.168.39.52:8443/healthz ...
	I0429 19:02:07.582917   26778 api_server.go:279] https://192.168.39.52:8443/healthz returned 200:
	ok
	I0429 19:02:07.582991   26778 round_trippers.go:463] GET https://192.168.39.52:8443/version
	I0429 19:02:07.582998   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:07.583008   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:07.583013   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:07.585045   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:07.585480   26778 api_server.go:141] control plane version: v1.30.0
	I0429 19:02:07.585507   26778 api_server.go:131] duration metric: took 10.887919ms to wait for apiserver health ...
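
The apiserver health step above is an authenticated GET of /healthz that expects the literal body "ok", followed by a GET of /version to read the control-plane version. A minimal sketch of the healthz part (the TLS/client-certificate setup from the profile is omitted; illustrative, not minikube's code):

import (
	"io"
	"net/http"
)

// apiserverHealthy reports true only for a 200 response whose body is exactly "ok",
// mirroring the check logged above.
func apiserverHealthy(client *http.Client, base string) (bool, error) {
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}
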
	I0429 19:02:07.585517   26778 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:02:07.748891   26778 request.go:629] Waited for 163.29562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:02:07.748977   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:02:07.748983   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:07.748990   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:07.748999   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:07.755331   26778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:02:07.761345   26778 system_pods.go:59] 17 kube-system pods found
	I0429 19:02:07.761393   26778 system_pods.go:61] "coredns-7db6d8ff4d-bbq9x" [a016fbf8-4a91-4f2f-97da-44b6e2195885] Running
	I0429 19:02:07.761402   26778 system_pods.go:61] "coredns-7db6d8ff4d-njch8" [823d223d-f7bd-4b9c-bdd9-8d0ae063d449] Running
	I0429 19:02:07.761412   26778 system_pods.go:61] "etcd-ha-058855" [a7e579b9-771a-4bb2-819b-a98848f52b09] Running
	I0429 19:02:07.761418   26778 system_pods.go:61] "etcd-ha-058855-m02" [08e98635-58d8-460b-9432-4bb03c74099c] Running
	I0429 19:02:07.761426   26778 system_pods.go:61] "kindnet-j42cd" [13d10343-b59f-490f-ac7c-973271cc27d2] Running
	I0429 19:02:07.761431   26778 system_pods.go:61] "kindnet-xdtp4" [510a69a6-5bd3-44ba-a81f-6d35a38b6ad2] Running
	I0429 19:02:07.761437   26778 system_pods.go:61] "kube-apiserver-ha-058855" [d2eb7bde-88b9-4366-be20-593097820579] Running
	I0429 19:02:07.761440   26778 system_pods.go:61] "kube-apiserver-ha-058855-m02" [94599f7a-b9de-4db3-b858-a380793bbd34] Running
	I0429 19:02:07.761444   26778 system_pods.go:61] "kube-controller-manager-ha-058855" [56527f4a-57d1-4a44-be01-7747abcbfce0] Running
	I0429 19:02:07.761448   26778 system_pods.go:61] "kube-controller-manager-ha-058855-m02" [201796e2-157c-40ce-bf68-c2472bab9e3a] Running
	I0429 19:02:07.761451   26778 system_pods.go:61] "kube-proxy-nz2rv" [32002a66-d55f-4011-bb78-c4c6e35238b3] Running
	I0429 19:02:07.761455   26778 system_pods.go:61] "kube-proxy-xldlc" [a01564cb-ea76-4cc5-abad-d2d70b79bf6d] Running
	I0429 19:02:07.761458   26778 system_pods.go:61] "kube-scheduler-ha-058855" [d71e876d-d5be-4671-924b-3fd828de92a1] Running
	I0429 19:02:07.761461   26778 system_pods.go:61] "kube-scheduler-ha-058855-m02" [69bbddf9-e5f6-4ede-abd0-762b0642fda4] Running
	I0429 19:02:07.761465   26778 system_pods.go:61] "kube-vip-ha-058855" [76e512c7-e0ea-417e-8239-63bb073dc04d] Running
	I0429 19:02:07.761468   26778 system_pods.go:61] "kube-vip-ha-058855-m02" [1569a60d-d6a1-4685-8405-689270322b97] Running
	I0429 19:02:07.761470   26778 system_pods.go:61] "storage-provisioner" [1572f7da-1bda-4b9e-a5fc-315aae3ba592] Running
	I0429 19:02:07.761476   26778 system_pods.go:74] duration metric: took 175.953408ms to wait for pod list to return data ...
	I0429 19:02:07.761487   26778 default_sa.go:34] waiting for default service account to be created ...
	I0429 19:02:07.948926   26778 request.go:629] Waited for 187.333923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:02:07.948993   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:02:07.948998   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:07.949005   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:07.949011   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:07.953595   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:07.953854   26778 default_sa.go:45] found service account: "default"
	I0429 19:02:07.953875   26778 default_sa.go:55] duration metric: took 192.380789ms for default service account to be created ...
	I0429 19:02:07.953892   26778 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 19:02:08.149354   26778 request.go:629] Waited for 195.395764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:02:08.149418   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:02:08.149425   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:08.149435   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:08.149443   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:08.157416   26778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:02:08.163248   26778 system_pods.go:86] 17 kube-system pods found
	I0429 19:02:08.163275   26778 system_pods.go:89] "coredns-7db6d8ff4d-bbq9x" [a016fbf8-4a91-4f2f-97da-44b6e2195885] Running
	I0429 19:02:08.163280   26778 system_pods.go:89] "coredns-7db6d8ff4d-njch8" [823d223d-f7bd-4b9c-bdd9-8d0ae063d449] Running
	I0429 19:02:08.163285   26778 system_pods.go:89] "etcd-ha-058855" [a7e579b9-771a-4bb2-819b-a98848f52b09] Running
	I0429 19:02:08.163289   26778 system_pods.go:89] "etcd-ha-058855-m02" [08e98635-58d8-460b-9432-4bb03c74099c] Running
	I0429 19:02:08.163293   26778 system_pods.go:89] "kindnet-j42cd" [13d10343-b59f-490f-ac7c-973271cc27d2] Running
	I0429 19:02:08.163297   26778 system_pods.go:89] "kindnet-xdtp4" [510a69a6-5bd3-44ba-a81f-6d35a38b6ad2] Running
	I0429 19:02:08.163301   26778 system_pods.go:89] "kube-apiserver-ha-058855" [d2eb7bde-88b9-4366-be20-593097820579] Running
	I0429 19:02:08.163305   26778 system_pods.go:89] "kube-apiserver-ha-058855-m02" [94599f7a-b9de-4db3-b858-a380793bbd34] Running
	I0429 19:02:08.163309   26778 system_pods.go:89] "kube-controller-manager-ha-058855" [56527f4a-57d1-4a44-be01-7747abcbfce0] Running
	I0429 19:02:08.163313   26778 system_pods.go:89] "kube-controller-manager-ha-058855-m02" [201796e2-157c-40ce-bf68-c2472bab9e3a] Running
	I0429 19:02:08.163319   26778 system_pods.go:89] "kube-proxy-nz2rv" [32002a66-d55f-4011-bb78-c4c6e35238b3] Running
	I0429 19:02:08.163323   26778 system_pods.go:89] "kube-proxy-xldlc" [a01564cb-ea76-4cc5-abad-d2d70b79bf6d] Running
	I0429 19:02:08.163328   26778 system_pods.go:89] "kube-scheduler-ha-058855" [d71e876d-d5be-4671-924b-3fd828de92a1] Running
	I0429 19:02:08.163333   26778 system_pods.go:89] "kube-scheduler-ha-058855-m02" [69bbddf9-e5f6-4ede-abd0-762b0642fda4] Running
	I0429 19:02:08.163338   26778 system_pods.go:89] "kube-vip-ha-058855" [76e512c7-e0ea-417e-8239-63bb073dc04d] Running
	I0429 19:02:08.163342   26778 system_pods.go:89] "kube-vip-ha-058855-m02" [1569a60d-d6a1-4685-8405-689270322b97] Running
	I0429 19:02:08.163348   26778 system_pods.go:89] "storage-provisioner" [1572f7da-1bda-4b9e-a5fc-315aae3ba592] Running
	I0429 19:02:08.163355   26778 system_pods.go:126] duration metric: took 209.454349ms to wait for k8s-apps to be running ...
	I0429 19:02:08.163369   26778 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 19:02:08.163413   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:02:08.179889   26778 system_svc.go:56] duration metric: took 16.512589ms WaitForService to wait for kubelet
	I0429 19:02:08.179921   26778 kubeadm.go:576] duration metric: took 16.859320064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:02:08.179940   26778 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:02:08.349388   26778 request.go:629] Waited for 169.36317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes
	I0429 19:02:08.349475   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes
	I0429 19:02:08.349482   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:08.349493   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:08.349511   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:08.354796   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:02:08.355527   26778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:02:08.355551   26778 node_conditions.go:123] node cpu capacity is 2
	I0429 19:02:08.355568   26778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:02:08.355573   26778 node_conditions.go:123] node cpu capacity is 2
	I0429 19:02:08.355589   26778 node_conditions.go:105] duration metric: took 175.640559ms to run NodePressure ...
	I0429 19:02:08.355604   26778 start.go:240] waiting for startup goroutines ...
	I0429 19:02:08.355639   26778 start.go:254] writing updated cluster config ...
	I0429 19:02:08.357710   26778 out.go:177] 
	I0429 19:02:08.359265   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:02:08.359376   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 19:02:08.361100   26778 out.go:177] * Starting "ha-058855-m03" control-plane node in "ha-058855" cluster
	I0429 19:02:08.362355   26778 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:02:08.362385   26778 cache.go:56] Caching tarball of preloaded images
	I0429 19:02:08.362500   26778 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 19:02:08.362513   26778 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 19:02:08.362613   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 19:02:08.362808   26778 start.go:360] acquireMachinesLock for ha-058855-m03: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:02:08.362872   26778 start.go:364] duration metric: took 41.606µs to acquireMachinesLock for "ha-058855-m03"
	I0429 19:02:08.362897   26778 start.go:93] Provisioning new machine with config: &{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:02:08.363007   26778 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0429 19:02:08.364585   26778 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 19:02:08.364702   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:02:08.364749   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:02:08.379686   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45739
	I0429 19:02:08.380148   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:02:08.380572   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:02:08.380594   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:02:08.380985   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:02:08.381208   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetMachineName
	I0429 19:02:08.381371   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:08.381582   26778 start.go:159] libmachine.API.Create for "ha-058855" (driver="kvm2")
	I0429 19:02:08.381617   26778 client.go:168] LocalClient.Create starting
	I0429 19:02:08.381660   26778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 19:02:08.381702   26778 main.go:141] libmachine: Decoding PEM data...
	I0429 19:02:08.381724   26778 main.go:141] libmachine: Parsing certificate...
	I0429 19:02:08.381788   26778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 19:02:08.381816   26778 main.go:141] libmachine: Decoding PEM data...
	I0429 19:02:08.381829   26778 main.go:141] libmachine: Parsing certificate...
	I0429 19:02:08.381855   26778 main.go:141] libmachine: Running pre-create checks...
	I0429 19:02:08.381866   26778 main.go:141] libmachine: (ha-058855-m03) Calling .PreCreateCheck
	I0429 19:02:08.382040   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetConfigRaw
	I0429 19:02:08.382510   26778 main.go:141] libmachine: Creating machine...
	I0429 19:02:08.382529   26778 main.go:141] libmachine: (ha-058855-m03) Calling .Create
	I0429 19:02:08.382664   26778 main.go:141] libmachine: (ha-058855-m03) Creating KVM machine...
	I0429 19:02:08.384200   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found existing default KVM network
	I0429 19:02:08.384300   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found existing private KVM network mk-ha-058855
	I0429 19:02:08.384458   26778 main.go:141] libmachine: (ha-058855-m03) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03 ...
	I0429 19:02:08.384489   26778 main.go:141] libmachine: (ha-058855-m03) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 19:02:08.384545   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:08.384452   28843 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:02:08.384682   26778 main.go:141] libmachine: (ha-058855-m03) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 19:02:08.613282   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:08.613105   28843 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa...
	I0429 19:02:08.790681   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:08.790569   28843 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/ha-058855-m03.rawdisk...
	I0429 19:02:08.790712   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Writing magic tar header
	I0429 19:02:08.790728   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Writing SSH key tar header
	I0429 19:02:08.790829   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:08.790771   28843 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03 ...
	I0429 19:02:08.790928   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03
	I0429 19:02:08.790947   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 19:02:08.790956   26778 main.go:141] libmachine: (ha-058855-m03) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03 (perms=drwx------)
	I0429 19:02:08.790963   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:02:08.790973   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 19:02:08.790982   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 19:02:08.790990   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home/jenkins
	I0429 19:02:08.790997   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home
	I0429 19:02:08.791004   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Skipping /home - not owner
	I0429 19:02:08.791015   26778 main.go:141] libmachine: (ha-058855-m03) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 19:02:08.791024   26778 main.go:141] libmachine: (ha-058855-m03) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 19:02:08.791033   26778 main.go:141] libmachine: (ha-058855-m03) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 19:02:08.791042   26778 main.go:141] libmachine: (ha-058855-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 19:02:08.791049   26778 main.go:141] libmachine: (ha-058855-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 19:02:08.791056   26778 main.go:141] libmachine: (ha-058855-m03) Creating domain...
	I0429 19:02:08.792048   26778 main.go:141] libmachine: (ha-058855-m03) define libvirt domain using xml: 
	I0429 19:02:08.792077   26778 main.go:141] libmachine: (ha-058855-m03) <domain type='kvm'>
	I0429 19:02:08.792099   26778 main.go:141] libmachine: (ha-058855-m03)   <name>ha-058855-m03</name>
	I0429 19:02:08.792117   26778 main.go:141] libmachine: (ha-058855-m03)   <memory unit='MiB'>2200</memory>
	I0429 19:02:08.792140   26778 main.go:141] libmachine: (ha-058855-m03)   <vcpu>2</vcpu>
	I0429 19:02:08.792157   26778 main.go:141] libmachine: (ha-058855-m03)   <features>
	I0429 19:02:08.792162   26778 main.go:141] libmachine: (ha-058855-m03)     <acpi/>
	I0429 19:02:08.792167   26778 main.go:141] libmachine: (ha-058855-m03)     <apic/>
	I0429 19:02:08.792172   26778 main.go:141] libmachine: (ha-058855-m03)     <pae/>
	I0429 19:02:08.792177   26778 main.go:141] libmachine: (ha-058855-m03)     
	I0429 19:02:08.792186   26778 main.go:141] libmachine: (ha-058855-m03)   </features>
	I0429 19:02:08.792198   26778 main.go:141] libmachine: (ha-058855-m03)   <cpu mode='host-passthrough'>
	I0429 19:02:08.792213   26778 main.go:141] libmachine: (ha-058855-m03)   
	I0429 19:02:08.792223   26778 main.go:141] libmachine: (ha-058855-m03)   </cpu>
	I0429 19:02:08.792229   26778 main.go:141] libmachine: (ha-058855-m03)   <os>
	I0429 19:02:08.792238   26778 main.go:141] libmachine: (ha-058855-m03)     <type>hvm</type>
	I0429 19:02:08.792256   26778 main.go:141] libmachine: (ha-058855-m03)     <boot dev='cdrom'/>
	I0429 19:02:08.792273   26778 main.go:141] libmachine: (ha-058855-m03)     <boot dev='hd'/>
	I0429 19:02:08.792288   26778 main.go:141] libmachine: (ha-058855-m03)     <bootmenu enable='no'/>
	I0429 19:02:08.792297   26778 main.go:141] libmachine: (ha-058855-m03)   </os>
	I0429 19:02:08.792308   26778 main.go:141] libmachine: (ha-058855-m03)   <devices>
	I0429 19:02:08.792327   26778 main.go:141] libmachine: (ha-058855-m03)     <disk type='file' device='cdrom'>
	I0429 19:02:08.792348   26778 main.go:141] libmachine: (ha-058855-m03)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/boot2docker.iso'/>
	I0429 19:02:08.792365   26778 main.go:141] libmachine: (ha-058855-m03)       <target dev='hdc' bus='scsi'/>
	I0429 19:02:08.792378   26778 main.go:141] libmachine: (ha-058855-m03)       <readonly/>
	I0429 19:02:08.792389   26778 main.go:141] libmachine: (ha-058855-m03)     </disk>
	I0429 19:02:08.792412   26778 main.go:141] libmachine: (ha-058855-m03)     <disk type='file' device='disk'>
	I0429 19:02:08.792427   26778 main.go:141] libmachine: (ha-058855-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 19:02:08.792443   26778 main.go:141] libmachine: (ha-058855-m03)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/ha-058855-m03.rawdisk'/>
	I0429 19:02:08.792457   26778 main.go:141] libmachine: (ha-058855-m03)       <target dev='hda' bus='virtio'/>
	I0429 19:02:08.792468   26778 main.go:141] libmachine: (ha-058855-m03)     </disk>
	I0429 19:02:08.792481   26778 main.go:141] libmachine: (ha-058855-m03)     <interface type='network'>
	I0429 19:02:08.792492   26778 main.go:141] libmachine: (ha-058855-m03)       <source network='mk-ha-058855'/>
	I0429 19:02:08.792502   26778 main.go:141] libmachine: (ha-058855-m03)       <model type='virtio'/>
	I0429 19:02:08.792513   26778 main.go:141] libmachine: (ha-058855-m03)     </interface>
	I0429 19:02:08.792533   26778 main.go:141] libmachine: (ha-058855-m03)     <interface type='network'>
	I0429 19:02:08.792555   26778 main.go:141] libmachine: (ha-058855-m03)       <source network='default'/>
	I0429 19:02:08.792567   26778 main.go:141] libmachine: (ha-058855-m03)       <model type='virtio'/>
	I0429 19:02:08.792592   26778 main.go:141] libmachine: (ha-058855-m03)     </interface>
	I0429 19:02:08.792613   26778 main.go:141] libmachine: (ha-058855-m03)     <serial type='pty'>
	I0429 19:02:08.792624   26778 main.go:141] libmachine: (ha-058855-m03)       <target port='0'/>
	I0429 19:02:08.792640   26778 main.go:141] libmachine: (ha-058855-m03)     </serial>
	I0429 19:02:08.792657   26778 main.go:141] libmachine: (ha-058855-m03)     <console type='pty'>
	I0429 19:02:08.792679   26778 main.go:141] libmachine: (ha-058855-m03)       <target type='serial' port='0'/>
	I0429 19:02:08.792694   26778 main.go:141] libmachine: (ha-058855-m03)     </console>
	I0429 19:02:08.792703   26778 main.go:141] libmachine: (ha-058855-m03)     <rng model='virtio'>
	I0429 19:02:08.792715   26778 main.go:141] libmachine: (ha-058855-m03)       <backend model='random'>/dev/random</backend>
	I0429 19:02:08.792727   26778 main.go:141] libmachine: (ha-058855-m03)     </rng>
	I0429 19:02:08.792739   26778 main.go:141] libmachine: (ha-058855-m03)     
	I0429 19:02:08.792751   26778 main.go:141] libmachine: (ha-058855-m03)     
	I0429 19:02:08.792762   26778 main.go:141] libmachine: (ha-058855-m03)   </devices>
	I0429 19:02:08.792774   26778 main.go:141] libmachine: (ha-058855-m03) </domain>
	I0429 19:02:08.792783   26778 main.go:141] libmachine: (ha-058855-m03) 
	I0429 19:02:08.799685   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:e5:cf:5c in network default
	I0429 19:02:08.800324   26778 main.go:141] libmachine: (ha-058855-m03) Ensuring networks are active...
	I0429 19:02:08.800341   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:08.801029   26778 main.go:141] libmachine: (ha-058855-m03) Ensuring network default is active
	I0429 19:02:08.801344   26778 main.go:141] libmachine: (ha-058855-m03) Ensuring network mk-ha-058855 is active
	I0429 19:02:08.801736   26778 main.go:141] libmachine: (ha-058855-m03) Getting domain xml...
	I0429 19:02:08.802442   26778 main.go:141] libmachine: (ha-058855-m03) Creating domain...
	I0429 19:02:10.035797   26778 main.go:141] libmachine: (ha-058855-m03) Waiting to get IP...
	I0429 19:02:10.036693   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:10.037215   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:10.037275   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:10.037211   28843 retry.go:31] will retry after 205.30777ms: waiting for machine to come up
	I0429 19:02:10.244541   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:10.245019   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:10.245048   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:10.244956   28843 retry.go:31] will retry after 360.234026ms: waiting for machine to come up
	I0429 19:02:10.606436   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:10.606889   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:10.606922   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:10.606815   28843 retry.go:31] will retry after 331.023484ms: waiting for machine to come up
	I0429 19:02:10.939402   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:10.939850   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:10.939872   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:10.939820   28843 retry.go:31] will retry after 374.808223ms: waiting for machine to come up
	I0429 19:02:11.316070   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:11.316490   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:11.316522   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:11.316429   28843 retry.go:31] will retry after 738.608974ms: waiting for machine to come up
	I0429 19:02:12.056259   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:12.056713   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:12.056753   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:12.056663   28843 retry.go:31] will retry after 651.218996ms: waiting for machine to come up
	I0429 19:02:12.708916   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:12.709538   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:12.709595   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:12.709483   28843 retry.go:31] will retry after 1.03070831s: waiting for machine to come up
	I0429 19:02:13.742455   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:13.742918   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:13.742947   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:13.742885   28843 retry.go:31] will retry after 1.458077686s: waiting for machine to come up
	I0429 19:02:15.203432   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:15.203828   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:15.203874   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:15.203783   28843 retry.go:31] will retry after 1.838914254s: waiting for machine to come up
	I0429 19:02:17.044416   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:17.044802   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:17.044826   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:17.044759   28843 retry.go:31] will retry after 1.717712909s: waiting for machine to come up
	I0429 19:02:18.764219   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:18.764743   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:18.764820   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:18.764760   28843 retry.go:31] will retry after 2.395935751s: waiting for machine to come up
	I0429 19:02:21.163089   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:21.163488   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:21.163520   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:21.163440   28843 retry.go:31] will retry after 3.531379998s: waiting for machine to come up
	I0429 19:02:24.696789   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:24.697155   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:24.697182   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:24.697111   28843 retry.go:31] will retry after 3.999554375s: waiting for machine to come up
	I0429 19:02:28.698037   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:28.698491   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:28.698521   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:28.698441   28843 retry.go:31] will retry after 4.45435299s: waiting for machine to come up
	I0429 19:02:33.155149   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:33.155672   26778 main.go:141] libmachine: (ha-058855-m03) Found IP for machine: 192.168.39.215
	I0429 19:02:33.155695   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has current primary IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:33.155704   26778 main.go:141] libmachine: (ha-058855-m03) Reserving static IP address...
	I0429 19:02:33.156035   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find host DHCP lease matching {name: "ha-058855-m03", mac: "52:54:00:78:23:56", ip: "192.168.39.215"} in network mk-ha-058855
	I0429 19:02:33.230932   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Getting to WaitForSSH function...
	I0429 19:02:33.230964   26778 main.go:141] libmachine: (ha-058855-m03) Reserved static IP address: 192.168.39.215
	I0429 19:02:33.230979   26778 main.go:141] libmachine: (ha-058855-m03) Waiting for SSH to be available...
	I0429 19:02:33.233825   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:33.234284   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855
	I0429 19:02:33.234315   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find defined IP address of network mk-ha-058855 interface with MAC address 52:54:00:78:23:56
	I0429 19:02:33.234471   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Using SSH client type: external
	I0429 19:02:33.234492   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa (-rw-------)
	I0429 19:02:33.234521   26778 main.go:141] libmachine: (ha-058855-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:02:33.234534   26778 main.go:141] libmachine: (ha-058855-m03) DBG | About to run SSH command:
	I0429 19:02:33.234546   26778 main.go:141] libmachine: (ha-058855-m03) DBG | exit 0
	I0429 19:02:33.238208   26778 main.go:141] libmachine: (ha-058855-m03) DBG | SSH cmd err, output: exit status 255: 
	I0429 19:02:33.238229   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0429 19:02:33.238236   26778 main.go:141] libmachine: (ha-058855-m03) DBG | command : exit 0
	I0429 19:02:33.238241   26778 main.go:141] libmachine: (ha-058855-m03) DBG | err     : exit status 255
	I0429 19:02:33.238282   26778 main.go:141] libmachine: (ha-058855-m03) DBG | output  : 
	I0429 19:02:36.238448   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Getting to WaitForSSH function...
	I0429 19:02:36.240759   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.241126   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.241159   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.241288   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Using SSH client type: external
	I0429 19:02:36.241305   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa (-rw-------)
	I0429 19:02:36.241332   26778 main.go:141] libmachine: (ha-058855-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:02:36.241347   26778 main.go:141] libmachine: (ha-058855-m03) DBG | About to run SSH command:
	I0429 19:02:36.241357   26778 main.go:141] libmachine: (ha-058855-m03) DBG | exit 0
	I0429 19:02:36.370936   26778 main.go:141] libmachine: (ha-058855-m03) DBG | SSH cmd err, output: <nil>: 
	I0429 19:02:36.371201   26778 main.go:141] libmachine: (ha-058855-m03) KVM machine creation complete!
	I0429 19:02:36.371505   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetConfigRaw
	I0429 19:02:36.372035   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:36.372218   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:36.372422   26778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 19:02:36.372444   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetState
	I0429 19:02:36.373794   26778 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 19:02:36.373815   26778 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 19:02:36.373823   26778 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 19:02:36.373833   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:36.376171   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.376554   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.376577   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.376800   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:36.377013   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.377179   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.377334   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:36.377526   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:02:36.377774   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0429 19:02:36.377788   26778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 19:02:36.493886   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:02:36.493908   26778 main.go:141] libmachine: Detecting the provisioner...
	I0429 19:02:36.493916   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:36.496489   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.496864   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.496897   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.497041   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:36.497239   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.497395   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.497551   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:36.497737   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:02:36.497944   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0429 19:02:36.497960   26778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 19:02:36.611677   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 19:02:36.611756   26778 main.go:141] libmachine: found compatible host: buildroot
	I0429 19:02:36.611771   26778 main.go:141] libmachine: Provisioning with buildroot...
	I0429 19:02:36.611783   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetMachineName
	I0429 19:02:36.612077   26778 buildroot.go:166] provisioning hostname "ha-058855-m03"
	I0429 19:02:36.612107   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetMachineName
	I0429 19:02:36.612296   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:36.615206   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.615663   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.615699   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.615838   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:36.616000   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.616186   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.616340   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:36.616522   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:02:36.616700   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0429 19:02:36.616713   26778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-058855-m03 && echo "ha-058855-m03" | sudo tee /etc/hostname
	I0429 19:02:36.755088   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-058855-m03
	
	I0429 19:02:36.755134   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:36.757679   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.757979   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.758014   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.758219   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:36.758409   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.758550   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.758696   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:36.758844   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:02:36.759005   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0429 19:02:36.759022   26778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-058855-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-058855-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-058855-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:02:36.887249   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:02:36.887285   26778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:02:36.887302   26778 buildroot.go:174] setting up certificates
	I0429 19:02:36.887313   26778 provision.go:84] configureAuth start
	I0429 19:02:36.887321   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetMachineName
	I0429 19:02:36.887665   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:02:36.890544   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.891010   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.891052   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.891197   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:36.893127   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.893425   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.893457   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.893582   26778 provision.go:143] copyHostCerts
	I0429 19:02:36.893622   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:02:36.893669   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:02:36.893681   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:02:36.893768   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:02:36.893861   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:02:36.893890   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:02:36.893913   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:02:36.893966   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:02:36.894030   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:02:36.894055   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:02:36.894080   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:02:36.894116   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:02:36.894185   26778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.ha-058855-m03 san=[127.0.0.1 192.168.39.215 ha-058855-m03 localhost minikube]
	I0429 19:02:37.309547   26778 provision.go:177] copyRemoteCerts
	I0429 19:02:37.309631   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:02:37.309662   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:37.312216   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.312602   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:37.312637   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.312788   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:37.312983   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:37.313179   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:37.313324   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:02:37.402259   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 19:02:37.402353   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:02:37.433368   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 19:02:37.433440   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 19:02:37.462744   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 19:02:37.462823   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 19:02:37.492777   26778 provision.go:87] duration metric: took 605.454335ms to configureAuth
	I0429 19:02:37.492803   26778 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:02:37.493003   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:02:37.493074   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:37.495751   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.496046   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:37.496079   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.496233   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:37.496448   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:37.496618   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:37.496815   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:37.496993   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:02:37.497190   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0429 19:02:37.497207   26778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:02:37.809266   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 19:02:37.809294   26778 main.go:141] libmachine: Checking connection to Docker...
	I0429 19:02:37.809305   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetURL
	I0429 19:02:37.810692   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Using libvirt version 6000000
	I0429 19:02:37.813109   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.813502   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:37.813536   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.813692   26778 main.go:141] libmachine: Docker is up and running!
	I0429 19:02:37.813763   26778 main.go:141] libmachine: Reticulating splines...
	I0429 19:02:37.813779   26778 client.go:171] duration metric: took 29.432154059s to LocalClient.Create
	I0429 19:02:37.813814   26778 start.go:167] duration metric: took 29.432234477s to libmachine.API.Create "ha-058855"
	I0429 19:02:37.813828   26778 start.go:293] postStartSetup for "ha-058855-m03" (driver="kvm2")
	I0429 19:02:37.813841   26778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:02:37.813864   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:37.814271   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:02:37.814300   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:37.817054   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.817370   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:37.817402   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.817550   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:37.817734   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:37.817880   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:37.818033   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:02:37.910621   26778 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:02:37.915741   26778 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:02:37.915770   26778 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:02:37.915856   26778 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:02:37.915950   26778 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:02:37.915961   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /etc/ssl/certs/151242.pem
	I0429 19:02:37.916067   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:02:37.928269   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:02:37.956261   26778 start.go:296] duration metric: took 142.421236ms for postStartSetup
	I0429 19:02:37.956321   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetConfigRaw
	I0429 19:02:37.957015   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:02:37.959500   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.959944   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:37.959976   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.960290   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 19:02:37.960552   26778 start.go:128] duration metric: took 29.597532358s to createHost
	I0429 19:02:37.960586   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:37.962770   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.963234   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:37.963271   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.963433   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:37.963613   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:37.963801   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:37.963972   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:37.964170   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:02:37.964399   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0429 19:02:37.964417   26778 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 19:02:38.083548   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714417358.071138040
	
	I0429 19:02:38.083570   26778 fix.go:216] guest clock: 1714417358.071138040
	I0429 19:02:38.083578   26778 fix.go:229] Guest: 2024-04-29 19:02:38.07113804 +0000 UTC Remote: 2024-04-29 19:02:37.96056996 +0000 UTC m=+232.025782840 (delta=110.56808ms)
	I0429 19:02:38.083592   26778 fix.go:200] guest clock delta is within tolerance: 110.56808ms
	I0429 19:02:38.083596   26778 start.go:83] releasing machines lock for "ha-058855-m03", held for 29.720713421s
	I0429 19:02:38.083611   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:38.083908   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:02:38.086506   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:38.086932   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:38.086962   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:38.089023   26778 out.go:177] * Found network options:
	I0429 19:02:38.090341   26778 out.go:177]   - NO_PROXY=192.168.39.52,192.168.39.27
	W0429 19:02:38.091645   26778 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:02:38.091670   26778 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:02:38.091683   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:38.092207   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:38.092425   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:38.092509   26778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:02:38.092551   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	W0429 19:02:38.092647   26778 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:02:38.092671   26778 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:02:38.092768   26778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:02:38.092790   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:38.095236   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:38.095576   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:38.095622   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:38.095649   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:38.095757   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:38.095947   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:38.096130   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:38.096155   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:38.096150   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:38.096329   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:02:38.096343   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:38.096504   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:38.096680   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:38.096859   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:02:38.343466   26778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:02:38.351513   26778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:02:38.351589   26778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:02:38.375353   26778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:02:38.375379   26778 start.go:494] detecting cgroup driver to use...
	I0429 19:02:38.375452   26778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:02:38.397208   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:02:38.418298   26778 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:02:38.418422   26778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:02:38.436518   26778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:02:38.453908   26778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:02:38.588024   26778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:02:38.745271   26778 docker.go:233] disabling docker service ...
	I0429 19:02:38.745365   26778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:02:38.762514   26778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:02:38.779768   26778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:02:38.939144   26778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:02:39.088367   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 19:02:39.104841   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:02:39.129824   26778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 19:02:39.129879   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:02:39.142601   26778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:02:39.142674   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:02:39.154592   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:02:39.166689   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:02:39.179184   26778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:02:39.192067   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:02:39.204521   26778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:02:39.226575   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:02:39.238581   26778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:02:39.248932   26778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 19:02:39.248996   26778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 19:02:39.266556   26778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:02:39.279346   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:02:39.434284   26778 ssh_runner.go:195] Run: sudo systemctl restart crio
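The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf over SSH with sed (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and then restarts crio. The following is a rough local equivalent of the two key rewrites in Go, assuming direct file access instead of ssh_runner; `setKey` is a hypothetical helper, not minikube code.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// setKey rewrites any uncommented "key = ..." line in a crio drop-in to the
// given quoted value, mirroring the sed invocations in the log.
func setKey(conf, key, value string) string {
	var out []string
	for _, line := range strings.Split(conf, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), key) {
			line = fmt.Sprintf("%s = %q", key, value)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := setKey(string(data), "pause_image", "registry.k8s.io/pause:3.9")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}
```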
	I0429 19:02:39.601503   26778 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:02:39.601594   26778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:02:39.607316   26778 start.go:562] Will wait 60s for crictl version
	I0429 19:02:39.607388   26778 ssh_runner.go:195] Run: which crictl
	I0429 19:02:39.611697   26778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:02:39.659249   26778 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 19:02:39.659378   26778 ssh_runner.go:195] Run: crio --version
	I0429 19:02:39.690516   26778 ssh_runner.go:195] Run: crio --version
	I0429 19:02:39.729860   26778 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 19:02:39.731245   26778 out.go:177]   - env NO_PROXY=192.168.39.52
	I0429 19:02:39.732491   26778 out.go:177]   - env NO_PROXY=192.168.39.52,192.168.39.27
	I0429 19:02:39.733604   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:02:39.736040   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:39.736447   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:39.736470   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:39.736659   26778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 19:02:39.742285   26778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
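The bash one-liner above keeps the host.minikube.internal mapping idempotent: it strips any stale entry and appends the current one (the same pattern is used later for control-plane.minikube.internal). A small Go sketch of that idea follows; `ensureHostEntry` is a hypothetical helper and it writes /etc/hosts directly rather than via a temp file and sudo cp as the log does.

```go
package main

import (
	"os"
	"strings"
)

// ensureHostEntry drops any existing line ending with "<TAB>hostname" and
// appends a fresh "ip<TAB>hostname" mapping.
func ensureHostEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
```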
	I0429 19:02:39.756775   26778 mustload.go:65] Loading cluster: ha-058855
	I0429 19:02:39.757045   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:02:39.757316   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:02:39.757351   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:02:39.773951   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0429 19:02:39.774445   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:02:39.774932   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:02:39.774961   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:02:39.775297   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:02:39.775505   26778 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:02:39.777196   26778 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:02:39.777471   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:02:39.777505   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:02:39.792184   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I0429 19:02:39.792554   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:02:39.793011   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:02:39.793038   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:02:39.793327   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:02:39.793506   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:02:39.793691   26778 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855 for IP: 192.168.39.215
	I0429 19:02:39.793706   26778 certs.go:194] generating shared ca certs ...
	I0429 19:02:39.793721   26778 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:02:39.793849   26778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 19:02:39.793893   26778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 19:02:39.793904   26778 certs.go:256] generating profile certs ...
	I0429 19:02:39.793971   26778 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key
	I0429 19:02:39.794003   26778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.9163a6e8
	I0429 19:02:39.794035   26778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.9163a6e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.52 192.168.39.27 192.168.39.215 192.168.39.254]
	I0429 19:02:39.991904   26778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.9163a6e8 ...
	I0429 19:02:39.991934   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.9163a6e8: {Name:mkf6aafe3c448ab66972fe7404e3da8fa4ed24be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:02:39.992108   26778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.9163a6e8 ...
	I0429 19:02:39.992125   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.9163a6e8: {Name:mk5a0d385f233676a34eab1265452db88346fefc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:02:39.992226   26778 certs.go:381] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.9163a6e8 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt
	I0429 19:02:39.992394   26778 certs.go:385] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.9163a6e8 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key
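The apiserver serving cert generated above carries IP SANs for every control-plane node plus the kube-vip VIP (192.168.39.254), so clients can reach any API server through the VIP without TLS errors. Below is a self-contained crypto/x509 sketch of issuing such a cert; the self-signed CA, subjects, and validity periods are illustrative assumptions, not minikube's actual certificate code, and error handling is elided for brevity.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway self-signed CA, just for the sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert whose IP SANs match the list in the log.
	sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.52", "192.168.39.27", "192.168.39.215", "192.168.39.254"}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range sans {
		tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(s))
	}
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```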
	I0429 19:02:39.992561   26778 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key
	I0429 19:02:39.992580   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:02:39.992601   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:02:39.992621   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:02:39.992643   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:02:39.992660   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:02:39.992677   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:02:39.992694   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:02:39.992711   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:02:39.992773   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 19:02:39.992812   26778 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 19:02:39.992825   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 19:02:39.992855   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 19:02:39.992885   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 19:02:39.992911   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 19:02:39.992964   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:02:39.993006   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:02:39.993025   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem -> /usr/share/ca-certificates/15124.pem
	I0429 19:02:39.993043   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /usr/share/ca-certificates/151242.pem
	I0429 19:02:39.993081   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:02:39.996247   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:02:39.996613   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:02:39.996640   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:02:39.996754   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:02:39.996959   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:02:39.997100   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:02:39.997217   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:02:40.086469   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0429 19:02:40.093590   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 19:02:40.107018   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0429 19:02:40.113388   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0429 19:02:40.128724   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 19:02:40.133791   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 19:02:40.148153   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0429 19:02:40.153260   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0429 19:02:40.167027   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0429 19:02:40.172164   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 19:02:40.184500   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0429 19:02:40.189911   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0429 19:02:40.202889   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:02:40.234045   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 19:02:40.260292   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:02:40.287864   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:02:40.316781   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0429 19:02:40.345179   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 19:02:40.373750   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:02:40.403708   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 19:02:40.432090   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:02:40.459413   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 19:02:40.490901   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 19:02:40.518701   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 19:02:40.538907   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0429 19:02:40.559221   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 19:02:40.578670   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0429 19:02:40.598072   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 19:02:40.616893   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0429 19:02:40.636592   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 19:02:40.655859   26778 ssh_runner.go:195] Run: openssl version
	I0429 19:02:40.662093   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:02:40.674096   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:02:40.679433   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:02:40.679485   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:02:40.685942   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:02:40.698587   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 19:02:40.711531   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 19:02:40.717116   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:02:40.717184   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 19:02:40.724132   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 19:02:40.736969   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 19:02:40.749856   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 19:02:40.755390   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:02:40.755438   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 19:02:40.761984   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
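The openssl/ln pairs above publish each CA bundle under its OpenSSL subject-hash name in /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients locate trust anchors. A Go sketch that performs the same two steps by shelling out to openssl, as the log does; `linkCACert` is a hypothetical helper.

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the cert's subject hash with openssl and symlinks
// /etc/ssl/certs/<hash>.0 at it.
func linkCACert(certPath string) error {
	var out bytes.Buffer
	cmd := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath)
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		return err
	}
	hash := strings.TrimSpace(out.String())
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // refresh if it already exists
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```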
	I0429 19:02:40.774254   26778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:02:40.779102   26778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 19:02:40.779168   26778 kubeadm.go:928] updating node {m03 192.168.39.215 8443 v1.30.0 crio true true} ...
	I0429 19:02:40.779258   26778 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-058855-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:02:40.779288   26778 kube-vip.go:115] generating kube-vip config ...
	I0429 19:02:40.779317   26778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 19:02:40.797341   26778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 19:02:40.797421   26778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
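kube-vip runs as a static pod: the manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp a few lines further down), where kubelet picks it up without involving the API server. A minimal sketch of rendering and dropping such a manifest follows; the template text and struct fields are illustrative only, not minikube's actual kube-vip template.

```go
package main

import (
	"os"
	"text/template"
)

// Simplified kube-vip static-pod template; only the VIP address and port
// are parameterized here.
var manifest = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
  hostNetwork: true
`))

func main() {
	// kubelet watches this directory for static-pod manifests.
	f, err := os.Create("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := manifest.Execute(f, struct{ VIP, Port string }{VIP: "192.168.39.254", Port: "8443"}); err != nil {
		panic(err)
	}
}
```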
	I0429 19:02:40.797483   26778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:02:40.809371   26778 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 19:02:40.809435   26778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 19:02:40.821464   26778 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 19:02:40.821473   26778 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0429 19:02:40.821489   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:02:40.821509   26778 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0429 19:02:40.821516   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:02:40.821529   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:02:40.821575   26778 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:02:40.821594   26778 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:02:40.841709   26778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 19:02:40.841752   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 19:02:40.841754   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:02:40.841829   26778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 19:02:40.841845   26778 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:02:40.841855   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 19:02:40.895363   26778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 19:02:40.895410   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
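Because the v1.30.0 binaries are fetched from dl.k8s.io with a ?checksum=file:...sha256 query, each download can be validated against the published SHA-256 before it lands in /var/lib/minikube/binaries. A sketch of that validation step; the digest literal in main is a placeholder, not a real checksum.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifySHA256 hashes a downloaded binary and compares it to the expected
// hex digest (the contents of the corresponding .sha256 file).
func verifySHA256(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Placeholder digest; substitute the contents of kubelet.sha256.
	if err := verifySHA256("/var/lib/minikube/binaries/v1.30.0/kubelet", "0123..."); err != nil {
		fmt.Println(err)
	}
}
```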
	I0429 19:02:41.867601   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 19:02:41.880075   26778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0429 19:02:41.900860   26778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:02:41.921885   26778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0429 19:02:41.943067   26778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 19:02:41.948246   26778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:02:41.964016   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:02:42.112311   26778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:02:42.135230   26778 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:02:42.135576   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:02:42.135614   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:02:42.152743   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0429 19:02:42.153274   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:02:42.153761   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:02:42.153785   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:02:42.154122   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:02:42.154324   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:02:42.154469   26778 start.go:316] joinCluster: &{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cluster
Name:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:02:42.154646   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 19:02:42.154672   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:02:42.158209   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:02:42.158721   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:02:42.158756   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:02:42.158969   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:02:42.159195   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:02:42.159371   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:02:42.159552   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:02:42.362024   26778 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:02:42.362090   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0snnwf.nqbstml13rkzgrsg --discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-058855-m03 --control-plane --apiserver-advertise-address=192.168.39.215 --apiserver-bind-port=8443"
	I0429 19:03:07.495774   26778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0snnwf.nqbstml13rkzgrsg --discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-058855-m03 --control-plane --apiserver-advertise-address=192.168.39.215 --apiserver-bind-port=8443": (25.133648499s)
	I0429 19:03:07.495812   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 19:03:08.135577   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-058855-m03 minikube.k8s.io/updated_at=2024_04_29T19_03_08_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-058855 minikube.k8s.io/primary=false
	I0429 19:03:08.280836   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-058855-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 19:03:08.434664   26778 start.go:318] duration metric: took 26.280192185s to joinCluster
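The --discovery-token-ca-cert-hash passed to kubeadm join above is the SHA-256 of the cluster CA's Subject Public Key Info, formatted as "sha256:<hex>". The sketch below recomputes it from the CA certificate, assuming /var/lib/minikube/certs/ca.crt is the CA in play.

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}
```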
	I0429 19:03:08.434750   26778 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:03:08.436414   26778 out.go:177] * Verifying Kubernetes components...
	I0429 19:03:08.435176   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:03:08.437771   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:03:08.666204   26778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:03:08.683821   26778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:03:08.684159   26778 kapi.go:59] client config for ha-058855: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.crt", KeyFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key", CAFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 19:03:08.684257   26778 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.52:8443
	I0429 19:03:08.684575   26778 node_ready.go:35] waiting up to 6m0s for node "ha-058855-m03" to be "Ready" ...
	I0429 19:03:08.684675   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:08.684687   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:08.684697   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:08.684706   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:08.688603   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:09.184793   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:09.184818   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:09.184827   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:09.184831   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:09.189995   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:03:09.685424   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:09.685448   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:09.685459   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:09.685464   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:09.689593   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:10.185544   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:10.185567   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:10.185576   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:10.185581   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:10.190409   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:10.684943   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:10.684963   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:10.684969   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:10.684972   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:10.689628   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:10.690791   26778 node_ready.go:53] node "ha-058855-m03" has status "Ready":"False"
	I0429 19:03:11.185285   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:11.185315   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:11.185327   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:11.185332   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:11.188959   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:11.684929   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:11.684950   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:11.684961   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:11.684966   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:11.689126   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:12.185695   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:12.185720   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:12.185737   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:12.185744   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:12.190216   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:12.685151   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:12.685177   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:12.685186   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:12.685189   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:12.689239   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:13.185654   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:13.185681   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:13.185691   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:13.185695   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:13.190613   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:13.191427   26778 node_ready.go:53] node "ha-058855-m03" has status "Ready":"False"
	I0429 19:03:13.685743   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:13.685766   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:13.685774   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:13.685778   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:13.692609   26778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:03:14.185752   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:14.185772   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:14.185780   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:14.185786   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:14.190560   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:14.684852   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:14.684871   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:14.684879   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:14.684885   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:14.688661   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:15.185463   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:15.185492   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:15.185502   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:15.185507   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:15.190585   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:03:15.191676   26778 node_ready.go:53] node "ha-058855-m03" has status "Ready":"False"
	I0429 19:03:15.685666   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:15.685692   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:15.685703   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:15.685710   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:15.697883   26778 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 19:03:16.185080   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:16.185102   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.185110   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.185116   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.189212   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:16.189931   26778 node_ready.go:49] node "ha-058855-m03" has status "Ready":"True"
	I0429 19:03:16.189947   26778 node_ready.go:38] duration metric: took 7.505347329s for node "ha-058855-m03" to be "Ready" ...
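The node_ready/round_trippers lines above poll GET /api/v1/nodes/ha-058855-m03 roughly every half second until the Ready condition flips to True (about 7.5s here). The same wait expressed with client-go rather than raw GETs; the kubeconfig path and node name come from the log, while the 2s interval and the helper name `waitNodeReady` are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the node reports Ready=True or the
// context expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18774-7754/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-058855-m03"); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}
```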
	I0429 19:03:16.189955   26778 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:03:16.190009   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:03:16.190018   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.190025   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.190029   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.197217   26778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:03:16.204930   26778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bbq9x" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.205009   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bbq9x
	I0429 19:03:16.205018   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.205025   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.205030   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.208475   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.209464   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:16.209482   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.209494   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.209500   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.213112   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.213556   26778 pod_ready.go:92] pod "coredns-7db6d8ff4d-bbq9x" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:16.213573   26778 pod_ready.go:81] duration metric: took 8.617213ms for pod "coredns-7db6d8ff4d-bbq9x" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.213585   26778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-njch8" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.213642   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-njch8
	I0429 19:03:16.213667   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.213681   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.213693   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.217199   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.217860   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:16.217875   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.217881   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.217884   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.220793   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:16.221539   26778 pod_ready.go:92] pod "coredns-7db6d8ff4d-njch8" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:16.221561   26778 pod_ready.go:81] duration metric: took 7.964356ms for pod "coredns-7db6d8ff4d-njch8" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.221573   26778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.221642   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855
	I0429 19:03:16.221650   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.221657   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.221664   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.224856   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.225524   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:16.225538   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.225545   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.225548   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.228517   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:16.229130   26778 pod_ready.go:92] pod "etcd-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:16.229154   26778 pod_ready.go:81] duration metric: took 7.568737ms for pod "etcd-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.229167   26778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.229236   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:03:16.229248   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.229258   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.229269   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.232144   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:16.232920   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:16.232938   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.232948   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.232954   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.235772   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:16.236461   26778 pod_ready.go:92] pod "etcd-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:16.236476   26778 pod_ready.go:81] duration metric: took 7.297385ms for pod "etcd-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.236485   26778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.385852   26778 request.go:629] Waited for 149.315468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:16.385926   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:16.385932   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.385938   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.385942   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.389444   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.585762   26778 request.go:629] Waited for 195.359427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:16.585816   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:16.585821   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.585831   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.585836   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.589427   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.785531   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:16.785566   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.785576   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.785584   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.789426   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.985809   26778 request.go:629] Waited for 195.39075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:16.985896   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:16.985904   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.985914   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.985922   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.990297   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:17.236736   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:17.236763   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:17.236774   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:17.236783   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:17.241950   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:03:17.385859   26778 request.go:629] Waited for 142.304779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:17.385918   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:17.385923   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:17.385930   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:17.385933   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:17.389434   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:17.737476   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:17.737502   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:17.737508   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:17.737512   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:17.741096   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:17.785407   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:17.785426   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:17.785434   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:17.785445   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:17.788949   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:18.236936   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:18.236957   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:18.236965   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:18.236969   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:18.241328   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:18.242530   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:18.242547   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:18.242559   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:18.242567   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:18.246029   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:18.247007   26778 pod_ready.go:102] pod "etcd-ha-058855-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 19:03:18.737275   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:18.737298   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:18.737306   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:18.737311   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:18.740677   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:18.741639   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:18.741657   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:18.741665   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:18.741670   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:18.745453   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:19.237477   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:19.237502   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:19.237510   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:19.237512   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:19.243186   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:03:19.243915   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:19.243929   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:19.243936   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:19.243940   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:19.248158   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:19.736926   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:19.736944   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:19.736951   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:19.736955   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:19.741603   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:19.742409   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:19.742429   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:19.742440   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:19.742446   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:19.745289   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:20.236917   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:20.236941   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:20.236948   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:20.236952   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:20.240954   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:20.241911   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:20.241930   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:20.241940   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:20.241946   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:20.246457   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:20.247195   26778 pod_ready.go:102] pod "etcd-ha-058855-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 19:03:20.737686   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:20.737706   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:20.737714   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:20.737720   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:20.741368   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:20.742258   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:20.742277   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:20.742288   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:20.742295   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:20.746684   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:21.236629   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:21.236651   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:21.236660   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:21.236664   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:21.240181   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:21.240922   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:21.240935   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:21.240942   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:21.240945   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:21.243692   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:21.737261   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:21.737286   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:21.737293   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:21.737299   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:21.741088   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:21.742043   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:21.742081   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:21.742097   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:21.742104   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:21.746837   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:22.238016   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:22.238134   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:22.238155   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:22.238163   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:22.243097   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:22.243938   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:22.243952   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:22.243959   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:22.243963   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:22.247173   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:22.247951   26778 pod_ready.go:102] pod "etcd-ha-058855-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 19:03:22.737178   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:22.737201   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:22.737210   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:22.737217   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:22.741015   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:22.741963   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:22.741981   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:22.741989   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:22.741994   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:22.744852   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:23.236994   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:23.237017   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.237026   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.237030   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.240942   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:23.242037   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:23.242055   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.242079   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.242085   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.246311   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:23.737687   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:23.737715   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.737723   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.737727   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.741346   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:23.742163   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:23.742183   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.742193   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.742202   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.745980   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:23.747161   26778 pod_ready.go:92] pod "etcd-ha-058855-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:23.747177   26778 pod_ready.go:81] duration metric: took 7.510686398s for pod "etcd-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.747195   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.747244   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-058855
	I0429 19:03:23.747252   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.747259   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.747264   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.750007   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:23.750784   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:23.750798   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.750804   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.750808   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.753646   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:23.754347   26778 pod_ready.go:92] pod "kube-apiserver-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:23.754369   26778 pod_ready.go:81] duration metric: took 7.166746ms for pod "kube-apiserver-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.754382   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.754449   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-058855-m02
	I0429 19:03:23.754461   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.754470   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.754480   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.757583   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:23.758348   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:23.758369   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.758379   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.758386   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.761583   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:23.762376   26778 pod_ready.go:92] pod "kube-apiserver-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:23.762403   26778 pod_ready.go:81] duration metric: took 8.008595ms for pod "kube-apiserver-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.762416   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.762477   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-058855-m03
	I0429 19:03:23.762489   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.762498   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.762506   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.765600   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:23.785577   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:23.785600   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.785614   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.785624   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.789644   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:23.790113   26778 pod_ready.go:92] pod "kube-apiserver-ha-058855-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:23.790135   26778 pod_ready.go:81] duration metric: took 27.710177ms for pod "kube-apiserver-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.790152   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.985599   26778 request.go:629] Waited for 195.362216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855
	I0429 19:03:23.985743   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855
	I0429 19:03:23.985760   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.985770   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.985780   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.990565   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:24.185616   26778 request.go:629] Waited for 194.385468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:24.185685   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:24.185691   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:24.185698   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:24.185701   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:24.191403   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:03:24.192456   26778 pod_ready.go:92] pod "kube-controller-manager-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:24.192487   26778 pod_ready.go:81] duration metric: took 402.32346ms for pod "kube-controller-manager-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:24.192501   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:24.385491   26778 request.go:629] Waited for 192.913821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855-m02
	I0429 19:03:24.385587   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855-m02
	I0429 19:03:24.385600   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:24.385635   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:24.385649   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:24.389934   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:24.585411   26778 request.go:629] Waited for 194.33868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:24.585462   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:24.585467   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:24.585474   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:24.585480   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:24.589092   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:24.589922   26778 pod_ready.go:92] pod "kube-controller-manager-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:24.589940   26778 pod_ready.go:81] duration metric: took 397.432121ms for pod "kube-controller-manager-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:24.589950   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:24.785450   26778 request.go:629] Waited for 195.433354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855-m03
	I0429 19:03:24.785524   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855-m03
	I0429 19:03:24.785534   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:24.785546   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:24.785558   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:24.789190   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:24.985397   26778 request.go:629] Waited for 195.408341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:24.985451   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:24.985456   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:24.985464   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:24.985468   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:24.989538   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:24.990200   26778 pod_ready.go:92] pod "kube-controller-manager-ha-058855-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:24.990220   26778 pod_ready.go:81] duration metric: took 400.262823ms for pod "kube-controller-manager-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:24.990234   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29svc" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:25.185154   26778 request.go:629] Waited for 194.843168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29svc
	I0429 19:03:25.185213   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29svc
	I0429 19:03:25.185227   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:25.185239   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:25.185248   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:25.189244   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:25.385293   26778 request.go:629] Waited for 195.292348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:25.385381   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:25.385392   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:25.385402   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:25.385411   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:25.389467   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:25.390387   26778 pod_ready.go:92] pod "kube-proxy-29svc" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:25.390408   26778 pod_ready.go:81] duration metric: took 400.167281ms for pod "kube-proxy-29svc" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:25.390420   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nz2rv" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:25.585353   26778 request.go:629] Waited for 194.866158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz2rv
	I0429 19:03:25.585427   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz2rv
	I0429 19:03:25.585432   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:25.585445   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:25.585463   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:25.589742   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:25.785860   26778 request.go:629] Waited for 195.365291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:25.785937   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:25.785942   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:25.785950   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:25.785956   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:25.789868   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:25.790609   26778 pod_ready.go:92] pod "kube-proxy-nz2rv" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:25.790627   26778 pod_ready.go:81] duration metric: took 400.194931ms for pod "kube-proxy-nz2rv" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:25.790636   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xldlc" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:25.986077   26778 request.go:629] Waited for 195.357381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xldlc
	I0429 19:03:25.986136   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xldlc
	I0429 19:03:25.986141   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:25.986149   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:25.986154   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:25.990111   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:26.185751   26778 request.go:629] Waited for 194.862355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:26.185836   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:26.185850   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:26.185860   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:26.185868   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:26.190387   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:26.191230   26778 pod_ready.go:92] pod "kube-proxy-xldlc" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:26.191251   26778 pod_ready.go:81] duration metric: took 400.608193ms for pod "kube-proxy-xldlc" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:26.191261   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:26.385320   26778 request.go:629] Waited for 193.992199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855
	I0429 19:03:26.385421   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855
	I0429 19:03:26.385432   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:26.385444   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:26.385453   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:26.389560   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:26.585558   26778 request.go:629] Waited for 195.251013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:26.585606   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:26.585611   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:26.585618   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:26.585621   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:26.589363   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:26.590528   26778 pod_ready.go:92] pod "kube-scheduler-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:26.590547   26778 pod_ready.go:81] duration metric: took 399.280221ms for pod "kube-scheduler-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:26.590556   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:26.785667   26778 request.go:629] Waited for 195.046202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855-m02
	I0429 19:03:26.785754   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855-m02
	I0429 19:03:26.785760   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:26.785777   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:26.785792   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:26.790042   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:26.985182   26778 request.go:629] Waited for 194.237698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:26.985263   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:26.985275   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:26.985285   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:26.985293   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:26.989513   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:26.990273   26778 pod_ready.go:92] pod "kube-scheduler-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:26.990291   26778 pod_ready.go:81] duration metric: took 399.728731ms for pod "kube-scheduler-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:26.990312   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:27.185770   26778 request.go:629] Waited for 195.380719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855-m03
	I0429 19:03:27.185863   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855-m03
	I0429 19:03:27.185874   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:27.185886   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:27.185895   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:27.189595   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:27.385739   26778 request.go:629] Waited for 195.383138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:27.385818   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:27.385828   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:27.385838   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:27.385849   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:27.389594   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:27.390414   26778 pod_ready.go:92] pod "kube-scheduler-ha-058855-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:27.390438   26778 pod_ready.go:81] duration metric: took 400.115122ms for pod "kube-scheduler-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:27.390451   26778 pod_ready.go:38] duration metric: took 11.20048647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:03:27.390463   26778 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:03:27.390512   26778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:03:27.408969   26778 api_server.go:72] duration metric: took 18.97418101s to wait for apiserver process to appear ...
	I0429 19:03:27.408993   26778 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:03:27.409017   26778 api_server.go:253] Checking apiserver healthz at https://192.168.39.52:8443/healthz ...
	I0429 19:03:27.415338   26778 api_server.go:279] https://192.168.39.52:8443/healthz returned 200:
	ok
	I0429 19:03:27.415400   26778 round_trippers.go:463] GET https://192.168.39.52:8443/version
	I0429 19:03:27.415407   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:27.415414   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:27.415418   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:27.416436   26778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 19:03:27.416557   26778 api_server.go:141] control plane version: v1.30.0
	I0429 19:03:27.416577   26778 api_server.go:131] duration metric: took 7.576605ms to wait for apiserver health ...
	I0429 19:03:27.416587   26778 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:03:27.586019   26778 request.go:629] Waited for 169.347655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:03:27.586101   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:03:27.586109   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:27.586117   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:27.586126   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:27.593529   26778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:03:27.601873   26778 system_pods.go:59] 24 kube-system pods found
	I0429 19:03:27.601901   26778 system_pods.go:61] "coredns-7db6d8ff4d-bbq9x" [a016fbf8-4a91-4f2f-97da-44b6e2195885] Running
	I0429 19:03:27.601906   26778 system_pods.go:61] "coredns-7db6d8ff4d-njch8" [823d223d-f7bd-4b9c-bdd9-8d0ae063d449] Running
	I0429 19:03:27.601911   26778 system_pods.go:61] "etcd-ha-058855" [a7e579b9-771a-4bb2-819b-a98848f52b09] Running
	I0429 19:03:27.601914   26778 system_pods.go:61] "etcd-ha-058855-m02" [08e98635-58d8-460b-9432-4bb03c74099c] Running
	I0429 19:03:27.601917   26778 system_pods.go:61] "etcd-ha-058855-m03" [829b8eb9-5772-4861-9de4-57e88f869a71] Running
	I0429 19:03:27.601920   26778 system_pods.go:61] "kindnet-j42cd" [13d10343-b59f-490f-ac7c-973271cc27d2] Running
	I0429 19:03:27.601923   26778 system_pods.go:61] "kindnet-m4fgv" [be3e3c54-e4e3-42ff-8433-1411fbd7ef75] Running
	I0429 19:03:27.601925   26778 system_pods.go:61] "kindnet-xdtp4" [510a69a6-5bd3-44ba-a81f-6d35a38b6ad2] Running
	I0429 19:03:27.601928   26778 system_pods.go:61] "kube-apiserver-ha-058855" [d2eb7bde-88b9-4366-be20-593097820579] Running
	I0429 19:03:27.601931   26778 system_pods.go:61] "kube-apiserver-ha-058855-m02" [94599f7a-b9de-4db3-b858-a380793bbd34] Running
	I0429 19:03:27.601934   26778 system_pods.go:61] "kube-apiserver-ha-058855-m03" [db757bbb-f7b3-472f-a22a-7b828d6fa543] Running
	I0429 19:03:27.601938   26778 system_pods.go:61] "kube-controller-manager-ha-058855" [56527f4a-57d1-4a44-be01-7747abcbfce0] Running
	I0429 19:03:27.601941   26778 system_pods.go:61] "kube-controller-manager-ha-058855-m02" [201796e2-157c-40ce-bf68-c2472bab9e3a] Running
	I0429 19:03:27.601945   26778 system_pods.go:61] "kube-controller-manager-ha-058855-m03" [a8046d54-c4bf-4152-b27a-19555664e7de] Running
	I0429 19:03:27.601948   26778 system_pods.go:61] "kube-proxy-29svc" [1c889e3e-7390-4e06-8bf3-424117496b4b] Running
	I0429 19:03:27.601952   26778 system_pods.go:61] "kube-proxy-nz2rv" [32002a66-d55f-4011-bb78-c4c6e35238b3] Running
	I0429 19:03:27.601957   26778 system_pods.go:61] "kube-proxy-xldlc" [a01564cb-ea76-4cc5-abad-d2d70b79bf6d] Running
	I0429 19:03:27.601960   26778 system_pods.go:61] "kube-scheduler-ha-058855" [d71e876d-d5be-4671-924b-3fd828de92a1] Running
	I0429 19:03:27.601963   26778 system_pods.go:61] "kube-scheduler-ha-058855-m02" [69bbddf9-e5f6-4ede-abd0-762b0642fda4] Running
	I0429 19:03:27.601967   26778 system_pods.go:61] "kube-scheduler-ha-058855-m03" [7d259b08-e0c4-4424-bc8f-1171f5fe7739] Running
	I0429 19:03:27.601973   26778 system_pods.go:61] "kube-vip-ha-058855" [76e512c7-e0ea-417e-8239-63bb073dc04d] Running
	I0429 19:03:27.601975   26778 system_pods.go:61] "kube-vip-ha-058855-m02" [1569a60d-d6a1-4685-8405-689270322b97] Running
	I0429 19:03:27.601979   26778 system_pods.go:61] "kube-vip-ha-058855-m03" [aa222d89-ec33-45a5-b1f4-296e4b89c4b7] Running
	I0429 19:03:27.601982   26778 system_pods.go:61] "storage-provisioner" [1572f7da-1bda-4b9e-a5fc-315aae3ba592] Running
	I0429 19:03:27.601988   26778 system_pods.go:74] duration metric: took 185.395278ms to wait for pod list to return data ...
	I0429 19:03:27.601998   26778 default_sa.go:34] waiting for default service account to be created ...
	I0429 19:03:27.785435   26778 request.go:629] Waited for 183.349656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:03:27.785499   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:03:27.785504   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:27.785512   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:27.785516   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:27.790928   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:03:27.791073   26778 default_sa.go:45] found service account: "default"
	I0429 19:03:27.791093   26778 default_sa.go:55] duration metric: took 189.089492ms for default service account to be created ...
	I0429 19:03:27.791105   26778 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 19:03:27.985568   26778 request.go:629] Waited for 194.356514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:03:27.985643   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:03:27.985648   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:27.985656   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:27.985660   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:27.992905   26778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:03:28.001131   26778 system_pods.go:86] 24 kube-system pods found
	I0429 19:03:28.001170   26778 system_pods.go:89] "coredns-7db6d8ff4d-bbq9x" [a016fbf8-4a91-4f2f-97da-44b6e2195885] Running
	I0429 19:03:28.001179   26778 system_pods.go:89] "coredns-7db6d8ff4d-njch8" [823d223d-f7bd-4b9c-bdd9-8d0ae063d449] Running
	I0429 19:03:28.001185   26778 system_pods.go:89] "etcd-ha-058855" [a7e579b9-771a-4bb2-819b-a98848f52b09] Running
	I0429 19:03:28.001192   26778 system_pods.go:89] "etcd-ha-058855-m02" [08e98635-58d8-460b-9432-4bb03c74099c] Running
	I0429 19:03:28.001198   26778 system_pods.go:89] "etcd-ha-058855-m03" [829b8eb9-5772-4861-9de4-57e88f869a71] Running
	I0429 19:03:28.001206   26778 system_pods.go:89] "kindnet-j42cd" [13d10343-b59f-490f-ac7c-973271cc27d2] Running
	I0429 19:03:28.001212   26778 system_pods.go:89] "kindnet-m4fgv" [be3e3c54-e4e3-42ff-8433-1411fbd7ef75] Running
	I0429 19:03:28.001218   26778 system_pods.go:89] "kindnet-xdtp4" [510a69a6-5bd3-44ba-a81f-6d35a38b6ad2] Running
	I0429 19:03:28.001224   26778 system_pods.go:89] "kube-apiserver-ha-058855" [d2eb7bde-88b9-4366-be20-593097820579] Running
	I0429 19:03:28.001230   26778 system_pods.go:89] "kube-apiserver-ha-058855-m02" [94599f7a-b9de-4db3-b858-a380793bbd34] Running
	I0429 19:03:28.001237   26778 system_pods.go:89] "kube-apiserver-ha-058855-m03" [db757bbb-f7b3-472f-a22a-7b828d6fa543] Running
	I0429 19:03:28.001243   26778 system_pods.go:89] "kube-controller-manager-ha-058855" [56527f4a-57d1-4a44-be01-7747abcbfce0] Running
	I0429 19:03:28.001255   26778 system_pods.go:89] "kube-controller-manager-ha-058855-m02" [201796e2-157c-40ce-bf68-c2472bab9e3a] Running
	I0429 19:03:28.001263   26778 system_pods.go:89] "kube-controller-manager-ha-058855-m03" [a8046d54-c4bf-4152-b27a-19555664e7de] Running
	I0429 19:03:28.001280   26778 system_pods.go:89] "kube-proxy-29svc" [1c889e3e-7390-4e06-8bf3-424117496b4b] Running
	I0429 19:03:28.001287   26778 system_pods.go:89] "kube-proxy-nz2rv" [32002a66-d55f-4011-bb78-c4c6e35238b3] Running
	I0429 19:03:28.001293   26778 system_pods.go:89] "kube-proxy-xldlc" [a01564cb-ea76-4cc5-abad-d2d70b79bf6d] Running
	I0429 19:03:28.001303   26778 system_pods.go:89] "kube-scheduler-ha-058855" [d71e876d-d5be-4671-924b-3fd828de92a1] Running
	I0429 19:03:28.001309   26778 system_pods.go:89] "kube-scheduler-ha-058855-m02" [69bbddf9-e5f6-4ede-abd0-762b0642fda4] Running
	I0429 19:03:28.001315   26778 system_pods.go:89] "kube-scheduler-ha-058855-m03" [7d259b08-e0c4-4424-bc8f-1171f5fe7739] Running
	I0429 19:03:28.001325   26778 system_pods.go:89] "kube-vip-ha-058855" [76e512c7-e0ea-417e-8239-63bb073dc04d] Running
	I0429 19:03:28.001331   26778 system_pods.go:89] "kube-vip-ha-058855-m02" [1569a60d-d6a1-4685-8405-689270322b97] Running
	I0429 19:03:28.001340   26778 system_pods.go:89] "kube-vip-ha-058855-m03" [aa222d89-ec33-45a5-b1f4-296e4b89c4b7] Running
	I0429 19:03:28.001346   26778 system_pods.go:89] "storage-provisioner" [1572f7da-1bda-4b9e-a5fc-315aae3ba592] Running
	I0429 19:03:28.001359   26778 system_pods.go:126] duration metric: took 210.243362ms to wait for k8s-apps to be running ...
	I0429 19:03:28.001370   26778 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 19:03:28.001424   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:03:28.018131   26778 system_svc.go:56] duration metric: took 16.748659ms WaitForService to wait for kubelet
	I0429 19:03:28.018167   26778 kubeadm.go:576] duration metric: took 19.583380603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:03:28.018189   26778 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:03:28.185610   26778 request.go:629] Waited for 167.343861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes
	I0429 19:03:28.185695   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes
	I0429 19:03:28.185704   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:28.185717   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:28.185725   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:28.190267   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:28.191669   26778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:03:28.191695   26778 node_conditions.go:123] node cpu capacity is 2
	I0429 19:03:28.191709   26778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:03:28.191714   26778 node_conditions.go:123] node cpu capacity is 2
	I0429 19:03:28.191718   26778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:03:28.191722   26778 node_conditions.go:123] node cpu capacity is 2
	I0429 19:03:28.191731   26778 node_conditions.go:105] duration metric: took 173.532452ms to run NodePressure ...
	I0429 19:03:28.191750   26778 start.go:240] waiting for startup goroutines ...
	I0429 19:03:28.191774   26778 start.go:254] writing updated cluster config ...
	I0429 19:03:28.192169   26778 ssh_runner.go:195] Run: rm -f paused
	I0429 19:03:28.245165   26778 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 19:03:28.247274   26778 out.go:177] * Done! kubectl is now configured to use "ha-058855" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.895067575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714417620895041143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba936531-95c9-488c-ad41-8fbb0dd292bd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.896112648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba462956-a260-42bb-ae59-3fe0cfd74753 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.896190674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba462956-a260-42bb-ae59-3fe0cfd74753 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.896473045Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417414458881064,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9fee3659bbbc0cfcb39700e786b8abaca5828c3a369213c71f8c24aead35f1,PodSandboxId:7535117780f63199f4d557275f58c4dbd45457c95f56a37f6dc4909ddb1934dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714417187571512441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187512853039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187459474931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a
91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ea9216c1d7c2ce6fc652bc1f2020e90ddd86266e6494480d19d53d424bfc01,PodSandboxId:99a43785ac56c5dd7e66b63e069f2b805e50ab4d83c6949997dd6ae7806b297e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17144171
84995953953,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417184691594429,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45ced81842ab99aabac98f2ac5d6e1b110a73465d11e56c87d6166d153839862,PodSandboxId:092f8bef902efe571a7c4bb49769bc4109d8855d291b7678d17ea4c9ea1e72fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417166093403108,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab58bfc4970fad85a73d065ba4eec99e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417163290366549,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9513857b60ae4b75efae6de6be9d83d589f9d511ba539d01bc7e371a1a0e090,PodSandboxId:d5c792e26a63f5182b337b3916dad1dff032b53207ab9bc1da61cbaee803b342,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417163246853598,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9139aba22c80eaaf47d55790db8284fc4c3d959ba23904a36880d4d936f4622,PodSandboxId:5dc22f2ba00277c3f8923983e3b802392c4264210a68e2e15c1e7fae5c399b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417163227484503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417163188534709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba462956-a260-42bb-ae59-3fe0cfd74753 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.942034702Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe7ca343-5e75-475f-9b12-852c5d457305 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.942121968Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe7ca343-5e75-475f-9b12-852c5d457305 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.944418857Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7916cc16-4075-4f25-8afa-6ae5d2b82443 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.945612080Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714417620945506505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7916cc16-4075-4f25-8afa-6ae5d2b82443 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.946467165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02f3dca0-f846-475e-852e-183e62f7c8bd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.946551116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02f3dca0-f846-475e-852e-183e62f7c8bd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.946917344Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417414458881064,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9fee3659bbbc0cfcb39700e786b8abaca5828c3a369213c71f8c24aead35f1,PodSandboxId:7535117780f63199f4d557275f58c4dbd45457c95f56a37f6dc4909ddb1934dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714417187571512441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187512853039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187459474931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a
91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ea9216c1d7c2ce6fc652bc1f2020e90ddd86266e6494480d19d53d424bfc01,PodSandboxId:99a43785ac56c5dd7e66b63e069f2b805e50ab4d83c6949997dd6ae7806b297e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17144171
84995953953,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417184691594429,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45ced81842ab99aabac98f2ac5d6e1b110a73465d11e56c87d6166d153839862,PodSandboxId:092f8bef902efe571a7c4bb49769bc4109d8855d291b7678d17ea4c9ea1e72fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417166093403108,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab58bfc4970fad85a73d065ba4eec99e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417163290366549,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9513857b60ae4b75efae6de6be9d83d589f9d511ba539d01bc7e371a1a0e090,PodSandboxId:d5c792e26a63f5182b337b3916dad1dff032b53207ab9bc1da61cbaee803b342,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417163246853598,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9139aba22c80eaaf47d55790db8284fc4c3d959ba23904a36880d4d936f4622,PodSandboxId:5dc22f2ba00277c3f8923983e3b802392c4264210a68e2e15c1e7fae5c399b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417163227484503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417163188534709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02f3dca0-f846-475e-852e-183e62f7c8bd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.995644158Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7f1c6db-c199-4f94-a1b4-ac8bc489d553 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.995944683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7f1c6db-c199-4f94-a1b4-ac8bc489d553 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.997629294Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ad6549a-ae46-47b5-8a5c-8ce8681a0e85 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.998242829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714417620998218519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ad6549a-ae46-47b5-8a5c-8ce8681a0e85 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.998919569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8ef218b-820a-4405-bb15-05e099d55098 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.998998625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8ef218b-820a-4405-bb15-05e099d55098 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:07:00 ha-058855 crio[682]: time="2024-04-29 19:07:00.999250318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417414458881064,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9fee3659bbbc0cfcb39700e786b8abaca5828c3a369213c71f8c24aead35f1,PodSandboxId:7535117780f63199f4d557275f58c4dbd45457c95f56a37f6dc4909ddb1934dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714417187571512441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187512853039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187459474931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a
91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ea9216c1d7c2ce6fc652bc1f2020e90ddd86266e6494480d19d53d424bfc01,PodSandboxId:99a43785ac56c5dd7e66b63e069f2b805e50ab4d83c6949997dd6ae7806b297e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17144171
84995953953,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417184691594429,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45ced81842ab99aabac98f2ac5d6e1b110a73465d11e56c87d6166d153839862,PodSandboxId:092f8bef902efe571a7c4bb49769bc4109d8855d291b7678d17ea4c9ea1e72fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417166093403108,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab58bfc4970fad85a73d065ba4eec99e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417163290366549,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9513857b60ae4b75efae6de6be9d83d589f9d511ba539d01bc7e371a1a0e090,PodSandboxId:d5c792e26a63f5182b337b3916dad1dff032b53207ab9bc1da61cbaee803b342,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417163246853598,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9139aba22c80eaaf47d55790db8284fc4c3d959ba23904a36880d4d936f4622,PodSandboxId:5dc22f2ba00277c3f8923983e3b802392c4264210a68e2e15c1e7fae5c399b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417163227484503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417163188534709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8ef218b-820a-4405-bb15-05e099d55098 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:07:01 ha-058855 crio[682]: time="2024-04-29 19:07:01.048814494Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=137be3dd-1060-4b1c-bb2d-b9f188142f0b name=/runtime.v1.RuntimeService/Version
	Apr 29 19:07:01 ha-058855 crio[682]: time="2024-04-29 19:07:01.048913050Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=137be3dd-1060-4b1c-bb2d-b9f188142f0b name=/runtime.v1.RuntimeService/Version
	Apr 29 19:07:01 ha-058855 crio[682]: time="2024-04-29 19:07:01.050599224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3598001c-cb09-4a8b-bb91-8afd4a9bd1ec name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:07:01 ha-058855 crio[682]: time="2024-04-29 19:07:01.051231224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714417621051199848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3598001c-cb09-4a8b-bb91-8afd4a9bd1ec name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:07:01 ha-058855 crio[682]: time="2024-04-29 19:07:01.051923766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6bbf4905-e0ba-4e0f-a805-4a578683801a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:07:01 ha-058855 crio[682]: time="2024-04-29 19:07:01.052008356Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6bbf4905-e0ba-4e0f-a805-4a578683801a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:07:01 ha-058855 crio[682]: time="2024-04-29 19:07:01.052247838Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417414458881064,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9fee3659bbbc0cfcb39700e786b8abaca5828c3a369213c71f8c24aead35f1,PodSandboxId:7535117780f63199f4d557275f58c4dbd45457c95f56a37f6dc4909ddb1934dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714417187571512441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187512853039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187459474931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a
91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ea9216c1d7c2ce6fc652bc1f2020e90ddd86266e6494480d19d53d424bfc01,PodSandboxId:99a43785ac56c5dd7e66b63e069f2b805e50ab4d83c6949997dd6ae7806b297e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17144171
84995953953,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417184691594429,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45ced81842ab99aabac98f2ac5d6e1b110a73465d11e56c87d6166d153839862,PodSandboxId:092f8bef902efe571a7c4bb49769bc4109d8855d291b7678d17ea4c9ea1e72fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417166093403108,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab58bfc4970fad85a73d065ba4eec99e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417163290366549,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9513857b60ae4b75efae6de6be9d83d589f9d511ba539d01bc7e371a1a0e090,PodSandboxId:d5c792e26a63f5182b337b3916dad1dff032b53207ab9bc1da61cbaee803b342,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417163246853598,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9139aba22c80eaaf47d55790db8284fc4c3d959ba23904a36880d4d936f4622,PodSandboxId:5dc22f2ba00277c3f8923983e3b802392c4264210a68e2e15c1e7fae5c399b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417163227484503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417163188534709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6bbf4905-e0ba-4e0f-a805-4a578683801a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3ebcb4aac0715       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   5d6b9a26ffca4       busybox-fc5497c4f-nst7c
	db9fee3659bbb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   7535117780f63       storage-provisioner
	35b38d136f10c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   27fc4fec5e3f0       coredns-7db6d8ff4d-njch8
	db099f7f56f78       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   1050f7bafa98e       coredns-7db6d8ff4d-bbq9x
	38ea9216c1d7c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   99a43785ac56c       kindnet-j42cd
	2e3b2e1683b77       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago       Running             kube-proxy                0                   fe7fa96de2987       kube-proxy-xldlc
	45ced81842ab9       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   092f8bef902ef       kube-vip-ha-058855
	3c1cf7e86cc05       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago       Running             kube-scheduler            0                   eaa9cff42f55b       kube-scheduler-ha-058855
	d9513857b60ae       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago       Running             kube-controller-manager   0                   d5c792e26a63f       kube-controller-manager-ha-058855
	d9139aba22c80       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago       Running             kube-apiserver            0                   5dc22f2ba0027       kube-apiserver-ha-058855
	f653b7a6c4efb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   40b3f5ad731ff       etcd-ha-058855
	
	
	==> coredns [35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b] <==
	[INFO] 10.244.2.2:42994 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000095231s
	[INFO] 10.244.0.4:59286 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.018415771s
	[INFO] 10.244.0.4:34309 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225287s
	[INFO] 10.244.0.4:56402 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167209s
	[INFO] 10.244.1.2:40060 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001736403s
	[INFO] 10.244.1.2:46625 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114006s
	[INFO] 10.244.1.2:57265 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118743s
	[INFO] 10.244.1.2:34075 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000376654s
	[INFO] 10.244.1.2:37316 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000287017s
	[INFO] 10.244.2.2:55857 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148708s
	[INFO] 10.244.2.2:34046 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114435s
	[INFO] 10.244.2.2:59123 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013463s
	[INFO] 10.244.0.4:52788 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139069s
	[INFO] 10.244.0.4:54898 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174069s
	[INFO] 10.244.0.4:50441 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004412s
	[INFO] 10.244.1.2:34029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183007s
	[INFO] 10.244.1.2:34413 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011174s
	[INFO] 10.244.1.2:46424 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144489s
	[INFO] 10.244.1.2:35983 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116269s
	[INFO] 10.244.2.2:36513 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000459857s
	[INFO] 10.244.0.4:40033 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000351605s
	[INFO] 10.244.0.4:45496 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128261s
	[INFO] 10.244.1.2:58777 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000204086s
	[INFO] 10.244.2.2:46697 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227863s
	[INFO] 10.244.2.2:60992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138077s
	
	
	==> coredns [db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe] <==
	[INFO] 10.244.2.2:38010 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00188289s
	[INFO] 10.244.0.4:49486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160106s
	[INFO] 10.244.0.4:50702 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002836903s
	[INFO] 10.244.0.4:35661 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120275s
	[INFO] 10.244.0.4:59999 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127179s
	[INFO] 10.244.0.4:38237 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178889s
	[INFO] 10.244.1.2:51028 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274871s
	[INFO] 10.244.1.2:44471 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001330026s
	[INFO] 10.244.1.2:42432 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122996s
	[INFO] 10.244.2.2:59580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000294012s
	[INFO] 10.244.2.2:60659 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00179161s
	[INFO] 10.244.2.2:39549 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000317743s
	[INFO] 10.244.2.2:43315 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001176961s
	[INFO] 10.244.2.2:32992 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190177s
	[INFO] 10.244.0.4:46409 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000047581s
	[INFO] 10.244.2.2:53037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141835s
	[INFO] 10.244.2.2:44640 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000203835s
	[INFO] 10.244.2.2:58171 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090591s
	[INFO] 10.244.0.4:44158 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106787s
	[INFO] 10.244.0.4:57643 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000199048s
	[INFO] 10.244.1.2:57285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127384s
	[INFO] 10.244.1.2:53223 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223061s
	[INFO] 10.244.1.2:54113 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106292s
	[INFO] 10.244.2.2:57470 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012081s
	[INFO] 10.244.2.2:35174 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139962s
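The CoreDNS entries above are ordinary in-cluster lookups (A/AAAA/PTR) answering with NOERROR, or the expected NXDOMAIN for partially qualified names. If one of these queries needs to be replayed by hand, a throwaway pod works; this is a sketch only, and the pod name dns-probe and the busybox image tag are illustrative choices, not taken from this report:

    kubectl --context ha-058855 run dns-probe --image=busybox:1.36 --restart=Never --rm -it -- \
      nslookup kubernetes.default.svc.cluster.local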
	
	
	==> describe nodes <==
	Name:               ha-058855
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T18_59_30_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 18:59:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:06:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:04:05 +0000   Mon, 29 Apr 2024 18:59:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:04:05 +0000   Mon, 29 Apr 2024 18:59:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:04:05 +0000   Mon, 29 Apr 2024 18:59:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:04:05 +0000   Mon, 29 Apr 2024 18:59:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    ha-058855
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4dd245ae2fbf4ffeb364af3ff6801808
	  System UUID:                4dd245ae-2fbf-4ffe-b364-af3ff6801808
	  Boot ID:                    41ab0acc-a7d3-4500-bada-adc41451a660
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nst7c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 coredns-7db6d8ff4d-bbq9x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m19s
	  kube-system                 coredns-7db6d8ff4d-njch8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m19s
	  kube-system                 etcd-ha-058855                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m32s
	  kube-system                 kindnet-j42cd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m19s
	  kube-system                 kube-apiserver-ha-058855             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-controller-manager-ha-058855    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-proxy-xldlc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-scheduler-ha-058855             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-vip-ha-058855                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m16s  kube-proxy       
	  Normal  Starting                 7m32s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m32s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m32s  kubelet          Node ha-058855 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m32s  kubelet          Node ha-058855 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m32s  kubelet          Node ha-058855 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m20s  node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Normal  NodeReady                7m15s  kubelet          Node ha-058855 status is now: NodeReady
	  Normal  RegisteredNode           4m56s  node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Normal  RegisteredNode           3m39s  node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	
	
	Name:               ha-058855-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_01_50_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:01:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:04:30 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 19:03:49 +0000   Mon, 29 Apr 2024 19:05:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 19:03:49 +0000   Mon, 29 Apr 2024 19:05:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 19:03:49 +0000   Mon, 29 Apr 2024 19:05:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 19:03:49 +0000   Mon, 29 Apr 2024 19:05:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-058855-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea727b7dfb674d998bb0a6c08dea140b
	  System UUID:                ea727b7d-fb67-4d99-8bb0-a6c08dea140b
	  Boot ID:                    990bbec7-ab66-4e93-ab63-93c34ed99031
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pr84n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-058855-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m12s
	  kube-system                 kindnet-xdtp4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m14s
	  kube-system                 kube-apiserver-ha-058855-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-controller-manager-ha-058855-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-nz2rv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-scheduler-ha-058855-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-vip-ha-058855-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m14s (x8 over 5m14s)  kubelet          Node ha-058855-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m14s (x8 over 5m14s)  kubelet          Node ha-058855-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m14s (x7 over 5m14s)  kubelet          Node ha-058855-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m10s                  node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  NodeNotReady             110s                   node-controller  Node ha-058855-m02 status is now: NodeNotReady
	
	
	Name:               ha-058855-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_03_08_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:03:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:06:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:04:04 +0000   Mon, 29 Apr 2024 19:03:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:04:04 +0000   Mon, 29 Apr 2024 19:03:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:04:04 +0000   Mon, 29 Apr 2024 19:03:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:04:04 +0000   Mon, 29 Apr 2024 19:03:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    ha-058855-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b6bc3a75b3f42f3aa365abccb76fd49
	  System UUID:                5b6bc3a7-5b3f-42f3-aa36-5abccb76fd49
	  Boot ID:                    012bcf6a-21fa-44f5-99a3-07d973e32c6e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xll26                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-058855-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m56s
	  kube-system                 kindnet-m4fgv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m58s
	  kube-system                 kube-apiserver-ha-058855-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-ha-058855-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-29svc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kube-system                 kube-scheduler-ha-058855-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-vip-ha-058855-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m58s (x8 over 3m58s)  kubelet          Node ha-058855-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m58s (x8 over 3m58s)  kubelet          Node ha-058855-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m58s (x7 over 3m58s)  kubelet          Node ha-058855-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-058855-m03 event: Registered Node ha-058855-m03 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-058855-m03 event: Registered Node ha-058855-m03 in Controller
	  Normal  RegisteredNode           3m39s                  node-controller  Node ha-058855-m03 event: Registered Node ha-058855-m03 in Controller
	
	
	Name:               ha-058855-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_04_09_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:04:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:06:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:04:38 +0000   Mon, 29 Apr 2024 19:04:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:04:38 +0000   Mon, 29 Apr 2024 19:04:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:04:38 +0000   Mon, 29 Apr 2024 19:04:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:04:38 +0000   Mon, 29 Apr 2024 19:04:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    ha-058855-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbc9ec7037144061a802010c8eaa7400
	  System UUID:                fbc9ec70-3714-4061-a802-010c8eaa7400
	  Boot ID:                    78cd3cac-98fc-427e-a5a6-f22c652ad17c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8mzbn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m53s
	  kube-system                 kube-proxy-7qjvk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s (x3 over 2m54s)  kubelet          Node ha-058855-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x3 over 2m54s)  kubelet          Node ha-058855-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x3 over 2m54s)  kubelet          Node ha-058855-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal  RegisteredNode           2m50s                  node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal  NodeReady                2m43s                  kubelet          Node ha-058855-m04 status is now: NodeReady
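The four node blocks above are kubectl describe output. Note that ha-058855-m02 is the only node whose conditions are Unknown, that it carries node.kubernetes.io/unreachable taints, and that it has a NodeNotReady event from the node-controller. A quick way to re-check the same state (sketch only; it assumes the cluster is still up and that the kubeconfig context is named after the profile, as elsewhere in this report):

    kubectl --context ha-058855 get nodes -o wide
    kubectl --context ha-058855 describe node ha-058855-m02 | grep -A2 Taints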
	
	
	==> dmesg <==
	[Apr29 18:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053006] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043670] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.664189] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.502838] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Apr29 18:59] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.235737] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.063053] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066472] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.176661] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.148881] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.312890] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.946074] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.072175] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.019108] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +1.004098] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.172368] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[  +0.079206] kauditd_printk_skb: 30 callbacks suppressed
	[ +15.239291] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.268922] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067] <==
	{"level":"warn","ts":"2024-04-29T19:07:01.3652Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.374115Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.378705Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.399702Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.40998Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.411736Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.419684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.425045Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.432251Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.442311Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.453879Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.458269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.46116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.468297Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.472301Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.476016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.484156Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.493475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.500503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.504291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.508543Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.510236Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.514672Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.520706Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:07:01.527656Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:07:01 up 8 min,  0 users,  load average: 0.46, 0.31, 0.15
	Linux ha-058855 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [38ea9216c1d7c2ce6fc652bc1f2020e90ddd86266e6494480d19d53d424bfc01] <==
	I0429 19:06:27.177656       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	I0429 19:06:37.186662       1 main.go:223] Handling node with IPs: map[192.168.39.52:{}]
	I0429 19:06:37.186952       1 main.go:227] handling current node
	I0429 19:06:37.187044       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 19:06:37.187071       1 main.go:250] Node ha-058855-m02 has CIDR [10.244.1.0/24] 
	I0429 19:06:37.187186       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0429 19:06:37.187215       1 main.go:250] Node ha-058855-m03 has CIDR [10.244.2.0/24] 
	I0429 19:06:37.187278       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I0429 19:06:37.187296       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	I0429 19:06:47.203192       1 main.go:223] Handling node with IPs: map[192.168.39.52:{}]
	I0429 19:06:47.203284       1 main.go:227] handling current node
	I0429 19:06:47.203312       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 19:06:47.203439       1 main.go:250] Node ha-058855-m02 has CIDR [10.244.1.0/24] 
	I0429 19:06:47.203618       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0429 19:06:47.203662       1 main.go:250] Node ha-058855-m03 has CIDR [10.244.2.0/24] 
	I0429 19:06:47.203736       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I0429 19:06:47.203813       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	I0429 19:06:57.221863       1 main.go:223] Handling node with IPs: map[192.168.39.52:{}]
	I0429 19:06:57.221911       1 main.go:227] handling current node
	I0429 19:06:57.221923       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 19:06:57.221929       1 main.go:250] Node ha-058855-m02 has CIDR [10.244.1.0/24] 
	I0429 19:06:57.222033       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0429 19:06:57.222073       1 main.go:250] Node ha-058855-m03 has CIDR [10.244.2.0/24] 
	I0429 19:06:57.222148       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I0429 19:06:57.222189       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d9139aba22c80eaaf47d55790db8284fc4c3d959ba23904a36880d4d936f4622] <==
	I0429 18:59:28.390043       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 18:59:28.407855       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.52]
	I0429 18:59:28.408978       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 18:59:28.410947       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 18:59:28.417872       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 18:59:29.459355       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 18:59:29.479068       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 18:59:29.655589       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 18:59:42.127669       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 18:59:42.419931       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 19:03:35.585563       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38490: use of closed network connection
	E0429 19:03:35.807350       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38508: use of closed network connection
	E0429 19:03:36.039102       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38524: use of closed network connection
	E0429 19:03:36.271511       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38544: use of closed network connection
	E0429 19:03:36.492521       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38574: use of closed network connection
	E0429 19:03:36.713236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38592: use of closed network connection
	E0429 19:03:36.917523       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38602: use of closed network connection
	E0429 19:03:37.139649       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38632: use of closed network connection
	E0429 19:03:37.355222       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38650: use of closed network connection
	E0429 19:03:37.703754       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38686: use of closed network connection
	E0429 19:03:37.912743       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38696: use of closed network connection
	E0429 19:03:38.127598       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38728: use of closed network connection
	E0429 19:03:38.327424       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38744: use of closed network connection
	E0429 19:03:38.549472       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38772: use of closed network connection
	E0429 19:03:38.764153       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38782: use of closed network connection
	
	
	==> kube-controller-manager [d9513857b60ae4b75efae6de6be9d83d589f9d511ba539d01bc7e371a1a0e090] <==
	I0429 19:03:29.663258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="228.25256ms"
	E0429 19:03:29.663322       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0429 19:03:29.760748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.384701ms"
	I0429 19:03:29.760911       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.248µs"
	I0429 19:03:31.024689       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.2µs"
	I0429 19:03:31.038397       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="127.478µs"
	I0429 19:03:31.055444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.764µs"
	I0429 19:03:31.074101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="171.866µs"
	I0429 19:03:31.079683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.981µs"
	I0429 19:03:31.096642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.776µs"
	I0429 19:03:33.638022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.907954ms"
	I0429 19:03:33.638229       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.372µs"
	I0429 19:03:34.830238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.188601ms"
	I0429 19:03:34.830386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.355µs"
	I0429 19:03:35.050992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.136317ms"
	I0429 19:03:35.051123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.617µs"
	E0429 19:04:07.911968       1 certificate_controller.go:146] Sync csr-22bt5 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-22bt5": the object has been modified; please apply your changes to the latest version and try again
	E0429 19:04:08.174118       1 certificate_controller.go:146] Sync csr-22bt5 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-22bt5": the object has been modified; please apply your changes to the latest version and try again
	I0429 19:04:08.229381       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-058855-m04\" does not exist"
	I0429 19:04:08.315881       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-058855-m04" podCIDRs=["10.244.3.0/24"]
	I0429 19:04:11.763753       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-058855-m04"
	I0429 19:04:18.919387       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-058855-m04"
	I0429 19:05:11.789436       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-058855-m04"
	I0429 19:05:11.967898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.348112ms"
	I0429 19:05:11.968148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.897µs"
	
	
	==> kube-proxy [2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5] <==
	I0429 18:59:44.874421       1 server_linux.go:69] "Using iptables proxy"
	I0429 18:59:44.884463       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.52"]
	I0429 18:59:44.940495       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 18:59:44.940581       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 18:59:44.940611       1 server_linux.go:165] "Using iptables Proxier"
	I0429 18:59:44.947719       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 18:59:44.948063       1 server.go:872] "Version info" version="v1.30.0"
	I0429 18:59:44.948102       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 18:59:44.950174       1 config.go:192] "Starting service config controller"
	I0429 18:59:44.950198       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 18:59:44.950218       1 config.go:101] "Starting endpoint slice config controller"
	I0429 18:59:44.950221       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 18:59:44.950870       1 config.go:319] "Starting node config controller"
	I0429 18:59:44.950879       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 18:59:45.050536       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 18:59:45.050601       1 shared_informer.go:320] Caches are synced for service config
	I0429 18:59:45.050926       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad] <==
	W0429 18:59:27.678007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 18:59:27.678057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 18:59:27.708155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 18:59:27.708516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 18:59:27.769910       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 18:59:27.770033       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 18:59:27.789498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 18:59:27.789723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 18:59:27.814415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 18:59:27.815351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 18:59:27.847043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 18:59:27.847480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 18:59:29.764635       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 19:03:03.809276       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-29svc\": pod kube-proxy-29svc is already assigned to node \"ha-058855-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-29svc" node="ha-058855-m03"
	E0429 19:03:03.809567       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1c889e3e-7390-4e06-8bf3-424117496b4b(kube-system/kube-proxy-29svc) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-29svc"
	E0429 19:03:03.809611       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-29svc\": pod kube-proxy-29svc is already assigned to node \"ha-058855-m03\"" pod="kube-system/kube-proxy-29svc"
	I0429 19:03:03.809678       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-29svc" node="ha-058855-m03"
	E0429 19:03:29.257363       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pr84n\": pod busybox-fc5497c4f-pr84n is already assigned to node \"ha-058855-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pr84n" node="ha-058855-m03"
	E0429 19:03:29.257496       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pr84n\": pod busybox-fc5497c4f-pr84n is already assigned to node \"ha-058855-m02\"" pod="default/busybox-fc5497c4f-pr84n"
	E0429 19:04:08.343596       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8mzbn\": pod kindnet-8mzbn is already assigned to node \"ha-058855-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-8mzbn" node="ha-058855-m04"
	E0429 19:04:08.343733       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8mzbn\": pod kindnet-8mzbn is already assigned to node \"ha-058855-m04\"" pod="kube-system/kindnet-8mzbn"
	E0429 19:04:08.353249       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7qjvk\": pod kube-proxy-7qjvk is already assigned to node \"ha-058855-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7qjvk" node="ha-058855-m04"
	E0429 19:04:08.353339       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ff88d6a4-0fb7-4aa1-afb1-808659755020(kube-system/kube-proxy-7qjvk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-7qjvk"
	E0429 19:04:08.353361       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7qjvk\": pod kube-proxy-7qjvk is already assigned to node \"ha-058855-m04\"" pod="kube-system/kube-proxy-7qjvk"
	I0429 19:04:08.353381       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7qjvk" node="ha-058855-m04"
	
	
	==> kubelet <==
	Apr 29 19:03:29 ha-058855 kubelet[1376]: E0429 19:03:29.329319    1376 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-058855" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-058855' and this object
	Apr 29 19:03:29 ha-058855 kubelet[1376]: I0429 19:03:29.396889    1376 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25pmm\" (UniqueName: \"kubernetes.io/projected/e810c83c-cdd7-4072-b8e8-319fd5aa4daa-kube-api-access-25pmm\") pod \"busybox-fc5497c4f-nst7c\" (UID: \"e810c83c-cdd7-4072-b8e8-319fd5aa4daa\") " pod="default/busybox-fc5497c4f-nst7c"
	Apr 29 19:03:29 ha-058855 kubelet[1376]: E0429 19:03:29.622027    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:03:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:03:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:03:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:03:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:03:30 ha-058855 kubelet[1376]: E0429 19:03:30.559495    1376 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 19:03:30 ha-058855 kubelet[1376]: E0429 19:03:30.559554    1376 projected.go:200] Error preparing data for projected volume kube-api-access-25pmm for pod default/busybox-fc5497c4f-nst7c: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 19:03:30 ha-058855 kubelet[1376]: E0429 19:03:30.559696    1376 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e810c83c-cdd7-4072-b8e8-319fd5aa4daa-kube-api-access-25pmm podName:e810c83c-cdd7-4072-b8e8-319fd5aa4daa nodeName:}" failed. No retries permitted until 2024-04-29 19:03:31.059646288 +0000 UTC m=+241.631094052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-25pmm" (UniqueName: "kubernetes.io/projected/e810c83c-cdd7-4072-b8e8-319fd5aa4daa-kube-api-access-25pmm") pod "busybox-fc5497c4f-nst7c" (UID: "e810c83c-cdd7-4072-b8e8-319fd5aa4daa") : failed to sync configmap cache: timed out waiting for the condition
	Apr 29 19:04:29 ha-058855 kubelet[1376]: E0429 19:04:29.601992    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:04:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:04:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:04:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:04:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:05:29 ha-058855 kubelet[1376]: E0429 19:05:29.604695    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:05:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:05:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:05:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:05:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:06:29 ha-058855 kubelet[1376]: E0429 19:06:29.605485    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:06:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:06:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:06:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:06:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-058855 -n ha-058855
helpers_test.go:261: (dbg) Run:  kubectl --context ha-058855 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.32s)
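Note: the kubelet post-mortem log above repeats "Could not set up iptables canary ... ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat'", which points at the ip6table_nat support being missing in the guest kernel rather than at this test's own assertions. A minimal sketch (not part of the test suite; it assumes it is run inside the VM, e.g. after "minikube ssh", and simply wraps a stock ip6tables query against the same "nat" table the kubelet canary touches):

	// ip6tables_nat_check.go - hedged illustration only, not the kubelet's actual canary logic.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// List the ip6tables "nat" table; this fails the same way the log above does
		// when ip6table_nat is not available in the guest kernel.
		out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
		if err != nil {
			fmt.Printf("ip6tables nat table not usable: %v\n%s", err, out)
			return
		}
		fmt.Println("ip6tables nat table present")
	}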

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (62.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr: exit status 3 (3.198918029s)

                                                
                                                
-- stdout --
	ha-058855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-058855-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:07:06.357285   34475 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:07:06.357403   34475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:06.357415   34475 out.go:304] Setting ErrFile to fd 2...
	I0429 19:07:06.357419   34475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:06.357606   34475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:07:06.357770   34475 out.go:298] Setting JSON to false
	I0429 19:07:06.357798   34475 mustload.go:65] Loading cluster: ha-058855
	I0429 19:07:06.357847   34475 notify.go:220] Checking for updates...
	I0429 19:07:06.358257   34475 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:07:06.358273   34475 status.go:255] checking status of ha-058855 ...
	I0429 19:07:06.358707   34475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:06.358769   34475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:06.377220   34475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I0429 19:07:06.377698   34475 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:06.378339   34475 main.go:141] libmachine: Using API Version  1
	I0429 19:07:06.378364   34475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:06.378753   34475 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:06.378973   34475 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:07:06.380743   34475 status.go:330] ha-058855 host status = "Running" (err=<nil>)
	I0429 19:07:06.380762   34475 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:06.381055   34475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:06.381098   34475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:06.396557   34475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46107
	I0429 19:07:06.396923   34475 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:06.397589   34475 main.go:141] libmachine: Using API Version  1
	I0429 19:07:06.397608   34475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:06.397974   34475 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:06.398156   34475 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:07:06.400743   34475 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:06.401134   34475 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:06.401159   34475 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:06.401304   34475 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:06.401620   34475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:06.401660   34475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:06.417631   34475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0429 19:07:06.418027   34475 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:06.418593   34475 main.go:141] libmachine: Using API Version  1
	I0429 19:07:06.418622   34475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:06.418929   34475 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:06.419141   34475 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:07:06.419294   34475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:06.419334   34475 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:07:06.421804   34475 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:06.422163   34475 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:06.422191   34475 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:06.422329   34475 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:07:06.422508   34475 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:07:06.422652   34475 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:07:06.422842   34475 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:07:06.510642   34475 ssh_runner.go:195] Run: systemctl --version
	I0429 19:07:06.518618   34475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:06.538250   34475 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:06.538282   34475 api_server.go:166] Checking apiserver status ...
	I0429 19:07:06.538332   34475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:06.557276   34475 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0429 19:07:06.569235   34475 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:06.569293   34475 ssh_runner.go:195] Run: ls
	I0429 19:07:06.576534   34475 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:06.581386   34475 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:06.581422   34475 status.go:422] ha-058855 apiserver status = Running (err=<nil>)
	I0429 19:07:06.581435   34475 status.go:257] ha-058855 status: &{Name:ha-058855 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:06.581456   34475 status.go:255] checking status of ha-058855-m02 ...
	I0429 19:07:06.581786   34475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:06.581824   34475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:06.597068   34475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46461
	I0429 19:07:06.597482   34475 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:06.597929   34475 main.go:141] libmachine: Using API Version  1
	I0429 19:07:06.597946   34475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:06.598311   34475 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:06.598532   34475 main.go:141] libmachine: (ha-058855-m02) Calling .GetState
	I0429 19:07:06.600073   34475 status.go:330] ha-058855-m02 host status = "Running" (err=<nil>)
	I0429 19:07:06.600091   34475 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:07:06.600367   34475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:06.600400   34475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:06.615936   34475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0429 19:07:06.616481   34475 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:06.617117   34475 main.go:141] libmachine: Using API Version  1
	I0429 19:07:06.617146   34475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:06.617487   34475 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:06.617697   34475 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:07:06.620558   34475 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:06.621043   34475 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:07:06.621077   34475 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:06.621233   34475 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:07:06.621608   34475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:06.621647   34475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:06.636627   34475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39505
	I0429 19:07:06.637081   34475 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:06.637517   34475 main.go:141] libmachine: Using API Version  1
	I0429 19:07:06.637543   34475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:06.637842   34475 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:06.638055   34475 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:07:06.638277   34475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:06.638299   34475 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:07:06.641049   34475 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:06.641604   34475 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:07:06.641652   34475 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:06.641757   34475 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:07:06.641930   34475 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:07:06.642111   34475 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:07:06.642252   34475 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	W0429 19:07:09.130485   34475 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0429 19:07:09.130584   34475 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0429 19:07:09.130606   34475 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:09.130621   34475 status.go:257] ha-058855-m02 status: &{Name:ha-058855-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 19:07:09.130641   34475 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:09.130649   34475 status.go:255] checking status of ha-058855-m03 ...
	I0429 19:07:09.131022   34475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:09.131062   34475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:09.146937   34475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40415
	I0429 19:07:09.147370   34475 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:09.147839   34475 main.go:141] libmachine: Using API Version  1
	I0429 19:07:09.147856   34475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:09.148161   34475 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:09.148346   34475 main.go:141] libmachine: (ha-058855-m03) Calling .GetState
	I0429 19:07:09.149909   34475 status.go:330] ha-058855-m03 host status = "Running" (err=<nil>)
	I0429 19:07:09.149926   34475 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:09.150290   34475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:09.150327   34475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:09.165601   34475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0429 19:07:09.166039   34475 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:09.166487   34475 main.go:141] libmachine: Using API Version  1
	I0429 19:07:09.166510   34475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:09.166840   34475 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:09.167010   34475 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:07:09.170054   34475 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:09.170568   34475 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:09.170606   34475 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:09.170765   34475 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:09.171063   34475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:09.171096   34475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:09.187365   34475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34295
	I0429 19:07:09.187848   34475 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:09.188294   34475 main.go:141] libmachine: Using API Version  1
	I0429 19:07:09.188321   34475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:09.188643   34475 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:09.188832   34475 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:07:09.189031   34475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:09.189055   34475 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:07:09.191893   34475 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:09.192292   34475 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:09.192339   34475 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:09.192528   34475 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:07:09.192721   34475 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:07:09.192872   34475 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:07:09.192986   34475 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:07:09.281262   34475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:09.298759   34475 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:09.298796   34475 api_server.go:166] Checking apiserver status ...
	I0429 19:07:09.298840   34475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:09.315036   34475 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0429 19:07:09.326880   34475 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:09.326958   34475 ssh_runner.go:195] Run: ls
	I0429 19:07:09.332703   34475 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:09.339930   34475 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:09.339959   34475 status.go:422] ha-058855-m03 apiserver status = Running (err=<nil>)
	I0429 19:07:09.339971   34475 status.go:257] ha-058855-m03 status: &{Name:ha-058855-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:09.339989   34475 status.go:255] checking status of ha-058855-m04 ...
	I0429 19:07:09.340398   34475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:09.340441   34475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:09.356154   34475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0429 19:07:09.356594   34475 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:09.357059   34475 main.go:141] libmachine: Using API Version  1
	I0429 19:07:09.357093   34475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:09.357365   34475 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:09.357570   34475 main.go:141] libmachine: (ha-058855-m04) Calling .GetState
	I0429 19:07:09.359131   34475 status.go:330] ha-058855-m04 host status = "Running" (err=<nil>)
	I0429 19:07:09.359146   34475 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:09.359542   34475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:09.359592   34475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:09.375314   34475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45217
	I0429 19:07:09.375702   34475 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:09.376226   34475 main.go:141] libmachine: Using API Version  1
	I0429 19:07:09.376248   34475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:09.376611   34475 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:09.376791   34475 main.go:141] libmachine: (ha-058855-m04) Calling .GetIP
	I0429 19:07:09.379344   34475 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:09.379713   34475 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:09.379742   34475 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:09.379844   34475 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:09.380255   34475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:09.380300   34475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:09.394982   34475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37247
	I0429 19:07:09.395382   34475 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:09.395892   34475 main.go:141] libmachine: Using API Version  1
	I0429 19:07:09.395914   34475 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:09.396178   34475 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:09.396380   34475 main.go:141] libmachine: (ha-058855-m04) Calling .DriverName
	I0429 19:07:09.396544   34475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:09.396566   34475 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHHostname
	I0429 19:07:09.399305   34475 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:09.399666   34475 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:09.399692   34475 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:09.399846   34475 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHPort
	I0429 19:07:09.400024   34475 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHKeyPath
	I0429 19:07:09.400158   34475 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHUsername
	I0429 19:07:09.400343   34475 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m04/id_rsa Username:docker}
	I0429 19:07:09.482631   34475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:09.497902   34475 status.go:257] ha-058855-m04 status: &{Name:ha-058855-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
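Note: the status check above fails only on ha-058855-m02, with "dial tcp 192.168.39.27:22: connect: no route to host", i.e. the just-restarted m02 VM is not yet reachable over SSH when minikube probes it, so the node is reported as host: Error / kubelet: Nonexistent. A minimal sketch (not part of the test suite; the address is copied from the log above purely as an example) that reproduces just that reachability probe from the host:

	// probe_ssh.go - hedged illustration of the TCP dial that is failing above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.27:22" // ha-058855-m02, per the log above
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Printf("%s unreachable: %v\n", addr, err) // e.g. "no route to host"
			return
		}
		conn.Close()
		fmt.Printf("%s reachable\n", addr)
	}

If this dial keeps failing from the host while the guest is booting, the symptom is VM networking/boot timing rather than anything kubelet- or apiserver-specific, which matches the retries seen in the subsequent status runs below.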
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr: exit status 3 (5.257967535s)

                                                
                                                
-- stdout --
	ha-058855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-058855-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:07:10.454336   34609 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:07:10.454563   34609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:10.454669   34609 out.go:304] Setting ErrFile to fd 2...
	I0429 19:07:10.454705   34609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:10.455055   34609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:07:10.455496   34609 out.go:298] Setting JSON to false
	I0429 19:07:10.455543   34609 mustload.go:65] Loading cluster: ha-058855
	I0429 19:07:10.455638   34609 notify.go:220] Checking for updates...
	I0429 19:07:10.456001   34609 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:07:10.456019   34609 status.go:255] checking status of ha-058855 ...
	I0429 19:07:10.456381   34609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:10.456429   34609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:10.472746   34609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0429 19:07:10.473256   34609 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:10.474016   34609 main.go:141] libmachine: Using API Version  1
	I0429 19:07:10.474038   34609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:10.474447   34609 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:10.474657   34609 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:07:10.476291   34609 status.go:330] ha-058855 host status = "Running" (err=<nil>)
	I0429 19:07:10.476308   34609 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:10.476736   34609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:10.476796   34609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:10.492909   34609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41185
	I0429 19:07:10.493350   34609 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:10.493852   34609 main.go:141] libmachine: Using API Version  1
	I0429 19:07:10.493871   34609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:10.494218   34609 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:10.494410   34609 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:07:10.497168   34609 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:10.497588   34609 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:10.497617   34609 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:10.497725   34609 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:10.497999   34609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:10.498040   34609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:10.513579   34609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38179
	I0429 19:07:10.514035   34609 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:10.514476   34609 main.go:141] libmachine: Using API Version  1
	I0429 19:07:10.514500   34609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:10.514852   34609 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:10.515039   34609 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:07:10.515218   34609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:10.515264   34609 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:07:10.517775   34609 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:10.518252   34609 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:10.518290   34609 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:10.518444   34609 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:07:10.518637   34609 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:07:10.518798   34609 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:07:10.518897   34609 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:07:10.611691   34609 ssh_runner.go:195] Run: systemctl --version
	I0429 19:07:10.619744   34609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:10.641173   34609 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:10.641206   34609 api_server.go:166] Checking apiserver status ...
	I0429 19:07:10.641246   34609 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:10.657366   34609 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0429 19:07:10.669995   34609 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:10.670047   34609 ssh_runner.go:195] Run: ls
	I0429 19:07:10.675600   34609 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:10.680856   34609 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:10.680886   34609 status.go:422] ha-058855 apiserver status = Running (err=<nil>)
	I0429 19:07:10.680899   34609 status.go:257] ha-058855 status: &{Name:ha-058855 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:10.680920   34609 status.go:255] checking status of ha-058855-m02 ...
	I0429 19:07:10.681237   34609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:10.681272   34609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:10.696641   34609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34109
	I0429 19:07:10.697089   34609 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:10.697506   34609 main.go:141] libmachine: Using API Version  1
	I0429 19:07:10.697528   34609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:10.697832   34609 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:10.698025   34609 main.go:141] libmachine: (ha-058855-m02) Calling .GetState
	I0429 19:07:10.699714   34609 status.go:330] ha-058855-m02 host status = "Running" (err=<nil>)
	I0429 19:07:10.699733   34609 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:07:10.700131   34609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:10.700202   34609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:10.715218   34609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I0429 19:07:10.715652   34609 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:10.716272   34609 main.go:141] libmachine: Using API Version  1
	I0429 19:07:10.716305   34609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:10.716585   34609 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:10.716808   34609 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:07:10.719826   34609 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:10.720233   34609 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:07:10.720260   34609 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:10.720432   34609 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:07:10.720768   34609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:10.720814   34609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:10.736598   34609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42669
	I0429 19:07:10.737076   34609 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:10.737639   34609 main.go:141] libmachine: Using API Version  1
	I0429 19:07:10.737675   34609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:10.738006   34609 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:10.738259   34609 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:07:10.738439   34609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:10.738460   34609 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:07:10.741193   34609 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:10.741665   34609 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:07:10.741690   34609 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:10.741838   34609 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:07:10.742043   34609 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:07:10.742246   34609 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:07:10.742428   34609 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	W0429 19:07:12.202406   34609 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:12.202485   34609 retry.go:31] will retry after 214.583911ms: dial tcp 192.168.39.27:22: connect: no route to host
	W0429 19:07:15.274368   34609 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0429 19:07:15.274460   34609 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0429 19:07:15.274488   34609 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:15.274504   34609 status.go:257] ha-058855-m02 status: &{Name:ha-058855-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 19:07:15.274524   34609 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:15.274541   34609 status.go:255] checking status of ha-058855-m03 ...
	I0429 19:07:15.274866   34609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:15.274925   34609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:15.290017   34609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0429 19:07:15.290500   34609 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:15.290996   34609 main.go:141] libmachine: Using API Version  1
	I0429 19:07:15.291033   34609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:15.291342   34609 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:15.291541   34609 main.go:141] libmachine: (ha-058855-m03) Calling .GetState
	I0429 19:07:15.293129   34609 status.go:330] ha-058855-m03 host status = "Running" (err=<nil>)
	I0429 19:07:15.293143   34609 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:15.293425   34609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:15.293477   34609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:15.308289   34609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I0429 19:07:15.308722   34609 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:15.309196   34609 main.go:141] libmachine: Using API Version  1
	I0429 19:07:15.309222   34609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:15.309544   34609 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:15.309734   34609 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:07:15.312514   34609 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:15.312940   34609 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:15.312968   34609 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:15.313079   34609 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:15.313501   34609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:15.313547   34609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:15.328790   34609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35453
	I0429 19:07:15.329253   34609 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:15.329789   34609 main.go:141] libmachine: Using API Version  1
	I0429 19:07:15.329810   34609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:15.330129   34609 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:15.330310   34609 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:07:15.330503   34609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:15.330522   34609 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:07:15.333393   34609 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:15.333901   34609 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:15.333932   34609 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:15.334121   34609 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:07:15.334291   34609 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:07:15.334449   34609 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:07:15.334564   34609 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:07:15.423400   34609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:15.444523   34609 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:15.444549   34609 api_server.go:166] Checking apiserver status ...
	I0429 19:07:15.444580   34609 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:15.461620   34609 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0429 19:07:15.472770   34609 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:15.472814   34609 ssh_runner.go:195] Run: ls
	I0429 19:07:15.478473   34609 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:15.483748   34609 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:15.483770   34609 status.go:422] ha-058855-m03 apiserver status = Running (err=<nil>)
	I0429 19:07:15.483781   34609 status.go:257] ha-058855-m03 status: &{Name:ha-058855-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:15.483800   34609 status.go:255] checking status of ha-058855-m04 ...
	I0429 19:07:15.484158   34609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:15.484205   34609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:15.502497   34609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35723
	I0429 19:07:15.502936   34609 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:15.503470   34609 main.go:141] libmachine: Using API Version  1
	I0429 19:07:15.503503   34609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:15.503834   34609 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:15.504004   34609 main.go:141] libmachine: (ha-058855-m04) Calling .GetState
	I0429 19:07:15.505451   34609 status.go:330] ha-058855-m04 host status = "Running" (err=<nil>)
	I0429 19:07:15.505469   34609 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:15.505778   34609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:15.505809   34609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:15.523322   34609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35121
	I0429 19:07:15.523797   34609 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:15.524287   34609 main.go:141] libmachine: Using API Version  1
	I0429 19:07:15.524309   34609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:15.524612   34609 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:15.524834   34609 main.go:141] libmachine: (ha-058855-m04) Calling .GetIP
	I0429 19:07:15.527852   34609 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:15.528332   34609 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:15.528501   34609 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:15.528636   34609 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:15.529039   34609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:15.529078   34609 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:15.545890   34609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44551
	I0429 19:07:15.546379   34609 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:15.546869   34609 main.go:141] libmachine: Using API Version  1
	I0429 19:07:15.546888   34609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:15.547199   34609 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:15.547375   34609 main.go:141] libmachine: (ha-058855-m04) Calling .DriverName
	I0429 19:07:15.547569   34609 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:15.547589   34609 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHHostname
	I0429 19:07:15.550324   34609 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:15.550732   34609 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:15.550764   34609 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:15.550863   34609 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHPort
	I0429 19:07:15.551028   34609 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHKeyPath
	I0429 19:07:15.551184   34609 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHUsername
	I0429 19:07:15.551491   34609 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m04/id_rsa Username:docker}
	I0429 19:07:15.639120   34609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:15.656204   34609 status.go:257] ha-058855-m04 status: &{Name:ha-058855-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
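
The probes in the log above show how the status check exercises each node: it SSHes in, checks /var usage with df -h /var | awk 'NR==2{print $5}', checks the kubelet with systemctl is-active, and, for control planes, probes the apiserver at https://192.168.39.254:8443/healthz. For ha-058855-m02 the SSH dial never succeeds ("no route to host"), so that node is reported as Host:Error / Kubelet:Nonexistent. Below is a minimal Go sketch of the healthz probe only, assuming the virtual IP and port taken from the log; it skips certificate verification to stay self-contained (the real client authenticates against the cluster CA) and is an illustration, not minikube's implementation.

// healthz_probe.go - illustrative only; mirrors the logged step
// "Checking apiserver healthz at https://192.168.39.254:8443/healthz ...".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Assumption: HA virtual IP and apiserver port copied from the log above.
	url := "https://192.168.39.254:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test VMs use a self-signed cluster CA; skipping verification
		// keeps this sketch runnable without the kubeconfig material.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusOK {
		fmt.Println("apiserver status = Running (healthz returned 200)")
	} else {
		fmt.Println("apiserver status = Error, healthz returned", resp.StatusCode)
	}
}

A 200 response corresponds to the "https://192.168.39.254:8443/healthz returned 200: ok" lines seen for ha-058855 and ha-058855-m03 above.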
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr: exit status 3 (4.598427179s)

                                                
                                                
-- stdout --
	ha-058855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-058855-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:07:17.466894   34773 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:07:17.467135   34773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:17.467145   34773 out.go:304] Setting ErrFile to fd 2...
	I0429 19:07:17.467149   34773 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:17.467426   34773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:07:17.467676   34773 out.go:298] Setting JSON to false
	I0429 19:07:17.467705   34773 mustload.go:65] Loading cluster: ha-058855
	I0429 19:07:17.467766   34773 notify.go:220] Checking for updates...
	I0429 19:07:17.468134   34773 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:07:17.468152   34773 status.go:255] checking status of ha-058855 ...
	I0429 19:07:17.468583   34773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:17.468657   34773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:17.486437   34773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34391
	I0429 19:07:17.486883   34773 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:17.487471   34773 main.go:141] libmachine: Using API Version  1
	I0429 19:07:17.487496   34773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:17.487851   34773 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:17.488041   34773 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:07:17.489780   34773 status.go:330] ha-058855 host status = "Running" (err=<nil>)
	I0429 19:07:17.489797   34773 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:17.490281   34773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:17.490330   34773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:17.505448   34773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38589
	I0429 19:07:17.505885   34773 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:17.506359   34773 main.go:141] libmachine: Using API Version  1
	I0429 19:07:17.506381   34773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:17.506750   34773 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:17.506958   34773 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:07:17.509907   34773 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:17.510355   34773 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:17.510396   34773 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:17.510525   34773 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:17.510859   34773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:17.510903   34773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:17.525641   34773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37081
	I0429 19:07:17.526172   34773 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:17.526681   34773 main.go:141] libmachine: Using API Version  1
	I0429 19:07:17.526708   34773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:17.527001   34773 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:17.527186   34773 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:07:17.527357   34773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:17.527379   34773 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:07:17.530575   34773 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:17.531108   34773 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:17.531129   34773 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:17.531307   34773 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:07:17.531459   34773 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:07:17.531750   34773 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:07:17.531908   34773 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:07:17.622820   34773 ssh_runner.go:195] Run: systemctl --version
	I0429 19:07:17.629759   34773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:17.653080   34773 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:17.653111   34773 api_server.go:166] Checking apiserver status ...
	I0429 19:07:17.653163   34773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:17.670987   34773 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0429 19:07:17.684197   34773 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:17.684244   34773 ssh_runner.go:195] Run: ls
	I0429 19:07:17.689884   34773 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:17.696907   34773 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:17.696933   34773 status.go:422] ha-058855 apiserver status = Running (err=<nil>)
	I0429 19:07:17.696943   34773 status.go:257] ha-058855 status: &{Name:ha-058855 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:17.696972   34773 status.go:255] checking status of ha-058855-m02 ...
	I0429 19:07:17.697284   34773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:17.697325   34773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:17.714944   34773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I0429 19:07:17.715367   34773 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:17.715890   34773 main.go:141] libmachine: Using API Version  1
	I0429 19:07:17.715921   34773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:17.716223   34773 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:17.716426   34773 main.go:141] libmachine: (ha-058855-m02) Calling .GetState
	I0429 19:07:17.718025   34773 status.go:330] ha-058855-m02 host status = "Running" (err=<nil>)
	I0429 19:07:17.718044   34773 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:07:17.718437   34773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:17.718486   34773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:17.734702   34773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0429 19:07:17.735103   34773 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:17.735609   34773 main.go:141] libmachine: Using API Version  1
	I0429 19:07:17.735643   34773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:17.735936   34773 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:17.736159   34773 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:07:17.739264   34773 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:17.739719   34773 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:07:17.739748   34773 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:17.739865   34773 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:07:17.740172   34773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:17.740211   34773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:17.755507   34773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0429 19:07:17.755944   34773 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:17.756455   34773 main.go:141] libmachine: Using API Version  1
	I0429 19:07:17.756479   34773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:17.756815   34773 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:17.756988   34773 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:07:17.757183   34773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:17.757199   34773 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:07:17.760152   34773 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:17.760549   34773 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:07:17.760571   34773 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:17.760731   34773 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:07:17.760914   34773 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:07:17.761056   34773 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:07:17.761201   34773 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	W0429 19:07:18.346232   34773 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:18.346272   34773 retry.go:31] will retry after 218.239186ms: dial tcp 192.168.39.27:22: connect: no route to host
	W0429 19:07:21.642308   34773 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0429 19:07:21.642379   34773 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0429 19:07:21.642393   34773 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:21.642400   34773 status.go:257] ha-058855-m02 status: &{Name:ha-058855-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 19:07:21.642417   34773 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:21.642425   34773 status.go:255] checking status of ha-058855-m03 ...
	I0429 19:07:21.642767   34773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:21.642806   34773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:21.657791   34773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39049
	I0429 19:07:21.658254   34773 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:21.658785   34773 main.go:141] libmachine: Using API Version  1
	I0429 19:07:21.658811   34773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:21.659107   34773 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:21.659323   34773 main.go:141] libmachine: (ha-058855-m03) Calling .GetState
	I0429 19:07:21.661184   34773 status.go:330] ha-058855-m03 host status = "Running" (err=<nil>)
	I0429 19:07:21.661205   34773 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:21.661527   34773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:21.661587   34773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:21.676560   34773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40469
	I0429 19:07:21.676959   34773 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:21.677354   34773 main.go:141] libmachine: Using API Version  1
	I0429 19:07:21.677377   34773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:21.677610   34773 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:21.677738   34773 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:07:21.680461   34773 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:21.680891   34773 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:21.680914   34773 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:21.681042   34773 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:21.681336   34773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:21.681372   34773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:21.696245   34773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32903
	I0429 19:07:21.696668   34773 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:21.697181   34773 main.go:141] libmachine: Using API Version  1
	I0429 19:07:21.697204   34773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:21.697524   34773 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:21.697740   34773 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:07:21.697937   34773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:21.697957   34773 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:07:21.700710   34773 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:21.701122   34773 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:21.701161   34773 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:21.701324   34773 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:07:21.701500   34773 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:07:21.701640   34773 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:07:21.701784   34773 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:07:21.791253   34773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:21.808392   34773 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:21.808421   34773 api_server.go:166] Checking apiserver status ...
	I0429 19:07:21.808468   34773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:21.825001   34773 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0429 19:07:21.838775   34773 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:21.838841   34773 ssh_runner.go:195] Run: ls
	I0429 19:07:21.844033   34773 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:21.848481   34773 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:21.848510   34773 status.go:422] ha-058855-m03 apiserver status = Running (err=<nil>)
	I0429 19:07:21.848521   34773 status.go:257] ha-058855-m03 status: &{Name:ha-058855-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:21.848535   34773 status.go:255] checking status of ha-058855-m04 ...
	I0429 19:07:21.848875   34773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:21.848911   34773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:21.864109   34773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41787
	I0429 19:07:21.864527   34773 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:21.864972   34773 main.go:141] libmachine: Using API Version  1
	I0429 19:07:21.864996   34773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:21.865289   34773 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:21.865529   34773 main.go:141] libmachine: (ha-058855-m04) Calling .GetState
	I0429 19:07:21.867192   34773 status.go:330] ha-058855-m04 host status = "Running" (err=<nil>)
	I0429 19:07:21.867209   34773 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:21.867497   34773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:21.867535   34773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:21.881794   34773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37593
	I0429 19:07:21.882247   34773 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:21.882819   34773 main.go:141] libmachine: Using API Version  1
	I0429 19:07:21.882836   34773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:21.883126   34773 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:21.883297   34773 main.go:141] libmachine: (ha-058855-m04) Calling .GetIP
	I0429 19:07:21.885647   34773 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:21.886045   34773 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:21.886087   34773 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:21.886220   34773 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:21.886533   34773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:21.886580   34773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:21.900599   34773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45689
	I0429 19:07:21.900960   34773 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:21.901380   34773 main.go:141] libmachine: Using API Version  1
	I0429 19:07:21.901397   34773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:21.901692   34773 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:21.901883   34773 main.go:141] libmachine: (ha-058855-m04) Calling .DriverName
	I0429 19:07:21.902087   34773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:21.902111   34773 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHHostname
	I0429 19:07:21.905360   34773 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:21.905813   34773 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:21.905858   34773 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:21.906013   34773 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHPort
	I0429 19:07:21.906190   34773 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHKeyPath
	I0429 19:07:21.906388   34773 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHUsername
	I0429 19:07:21.906550   34773 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m04/id_rsa Username:docker}
	I0429 19:07:21.990798   34773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:22.007095   34773 status.go:257] ha-058855-m04 status: &{Name:ha-058855-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
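
On every run, ha-058855-m02 fails at the SSH layer: sshutil logs "dial failure (will retry)", retries after a short backoff (e.g. 218.239186ms above), and finally gives up with "connect: no route to host", at which point the node is marked Host:Error. The sketch below reproduces just that reachability check with a bounded retry, assuming the node address from the log; it is an illustration of the observed behaviour, not minikube's sshutil package.

// ssh_reachability.go - illustrative retrying TCP dial against the SSH port
// of ha-058855-m02 (192.168.39.27:22), mirroring the log's retry pattern.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int, wait time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		var conn net.Conn
		conn, err = net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil // port 22 reachable; an SSH session could be opened
		}
		fmt.Printf("dial failure (will retry): %v\n", err)
		time.Sleep(wait)
	}
	return err // e.g. "connect: no route to host" for a stopped or unreachable VM
}

func main() {
	// Assumption: address of ha-058855-m02 copied from the log above;
	// attempt count and backoff are arbitrary for the sketch.
	if err := dialWithRetry("192.168.39.27:22", 3, 250*time.Millisecond); err != nil {
		fmt.Println("host status = Error:", err)
	}
}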
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr: exit status 3 (3.778896405s)

                                                
                                                
-- stdout --
	ha-058855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-058855-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:07:25.099887   34964 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:07:25.100006   34964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:25.100018   34964 out.go:304] Setting ErrFile to fd 2...
	I0429 19:07:25.100022   34964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:25.100215   34964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:07:25.100437   34964 out.go:298] Setting JSON to false
	I0429 19:07:25.100461   34964 mustload.go:65] Loading cluster: ha-058855
	I0429 19:07:25.100581   34964 notify.go:220] Checking for updates...
	I0429 19:07:25.100957   34964 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:07:25.100973   34964 status.go:255] checking status of ha-058855 ...
	I0429 19:07:25.101403   34964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:25.101459   34964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:25.118462   34964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
	I0429 19:07:25.118875   34964 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:25.119523   34964 main.go:141] libmachine: Using API Version  1
	I0429 19:07:25.119557   34964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:25.119889   34964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:25.120061   34964 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:07:25.121638   34964 status.go:330] ha-058855 host status = "Running" (err=<nil>)
	I0429 19:07:25.121667   34964 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:25.122107   34964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:25.122154   34964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:25.137214   34964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46043
	I0429 19:07:25.137617   34964 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:25.138012   34964 main.go:141] libmachine: Using API Version  1
	I0429 19:07:25.138031   34964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:25.138334   34964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:25.138505   34964 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:07:25.141119   34964 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:25.141507   34964 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:25.141531   34964 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:25.141675   34964 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:25.141962   34964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:25.141992   34964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:25.156570   34964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42145
	I0429 19:07:25.157005   34964 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:25.157477   34964 main.go:141] libmachine: Using API Version  1
	I0429 19:07:25.157502   34964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:25.157848   34964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:25.158091   34964 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:07:25.158317   34964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:25.158354   34964 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:07:25.161060   34964 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:25.161435   34964 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:25.161464   34964 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:25.161589   34964 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:07:25.161771   34964 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:07:25.161971   34964 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:07:25.162157   34964 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:07:25.247405   34964 ssh_runner.go:195] Run: systemctl --version
	I0429 19:07:25.254995   34964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:25.272667   34964 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:25.272693   34964 api_server.go:166] Checking apiserver status ...
	I0429 19:07:25.272723   34964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:25.291532   34964 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0429 19:07:25.304482   34964 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:25.304550   34964 ssh_runner.go:195] Run: ls
	I0429 19:07:25.309945   34964 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:25.318509   34964 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:25.318541   34964 status.go:422] ha-058855 apiserver status = Running (err=<nil>)
	I0429 19:07:25.318555   34964 status.go:257] ha-058855 status: &{Name:ha-058855 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:25.318575   34964 status.go:255] checking status of ha-058855-m02 ...
	I0429 19:07:25.318978   34964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:25.319022   34964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:25.336573   34964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45529
	I0429 19:07:25.337014   34964 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:25.337542   34964 main.go:141] libmachine: Using API Version  1
	I0429 19:07:25.337568   34964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:25.337867   34964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:25.338043   34964 main.go:141] libmachine: (ha-058855-m02) Calling .GetState
	I0429 19:07:25.339538   34964 status.go:330] ha-058855-m02 host status = "Running" (err=<nil>)
	I0429 19:07:25.339552   34964 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:07:25.339822   34964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:25.339864   34964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:25.356273   34964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45077
	I0429 19:07:25.356639   34964 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:25.357068   34964 main.go:141] libmachine: Using API Version  1
	I0429 19:07:25.357100   34964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:25.357489   34964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:25.357689   34964 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:07:25.360615   34964 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:25.361117   34964 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:07:25.361146   34964 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:25.361339   34964 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:07:25.361709   34964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:25.361745   34964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:25.377058   34964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43123
	I0429 19:07:25.377475   34964 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:25.377910   34964 main.go:141] libmachine: Using API Version  1
	I0429 19:07:25.377935   34964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:25.378294   34964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:25.378484   34964 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:07:25.378641   34964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:25.378663   34964 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:07:25.381415   34964 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:25.381767   34964 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:07:25.381805   34964 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:25.381933   34964 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:07:25.382118   34964 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:07:25.382256   34964 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:07:25.382380   34964 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	W0429 19:07:28.458295   34964 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0429 19:07:28.458383   34964 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0429 19:07:28.458402   34964 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:28.458412   34964 status.go:257] ha-058855-m02 status: &{Name:ha-058855-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 19:07:28.458437   34964 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:28.458448   34964 status.go:255] checking status of ha-058855-m03 ...
	I0429 19:07:28.458776   34964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:28.458825   34964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:28.473563   34964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35699
	I0429 19:07:28.474098   34964 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:28.474691   34964 main.go:141] libmachine: Using API Version  1
	I0429 19:07:28.474715   34964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:28.475051   34964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:28.475293   34964 main.go:141] libmachine: (ha-058855-m03) Calling .GetState
	I0429 19:07:28.476876   34964 status.go:330] ha-058855-m03 host status = "Running" (err=<nil>)
	I0429 19:07:28.476894   34964 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:28.477173   34964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:28.477218   34964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:28.491451   34964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45471
	I0429 19:07:28.491890   34964 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:28.492335   34964 main.go:141] libmachine: Using API Version  1
	I0429 19:07:28.492358   34964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:28.492634   34964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:28.492840   34964 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:07:28.495680   34964 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:28.496141   34964 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:28.496172   34964 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:28.496261   34964 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:28.496663   34964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:28.496706   34964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:28.511453   34964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37019
	I0429 19:07:28.511929   34964 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:28.512415   34964 main.go:141] libmachine: Using API Version  1
	I0429 19:07:28.512439   34964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:28.512800   34964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:28.513016   34964 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:07:28.513240   34964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:28.513264   34964 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:07:28.516087   34964 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:28.516442   34964 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:28.516470   34964 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:28.516623   34964 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:07:28.516773   34964 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:07:28.516924   34964 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:07:28.517075   34964 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:07:28.602679   34964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:28.621600   34964 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:28.621627   34964 api_server.go:166] Checking apiserver status ...
	I0429 19:07:28.621660   34964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:28.639157   34964 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0429 19:07:28.650602   34964 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:28.650654   34964 ssh_runner.go:195] Run: ls
	I0429 19:07:28.655759   34964 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:28.661721   34964 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:28.661742   34964 status.go:422] ha-058855-m03 apiserver status = Running (err=<nil>)
	I0429 19:07:28.661750   34964 status.go:257] ha-058855-m03 status: &{Name:ha-058855-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:28.661774   34964 status.go:255] checking status of ha-058855-m04 ...
	I0429 19:07:28.662149   34964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:28.662184   34964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:28.676762   34964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43927
	I0429 19:07:28.677142   34964 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:28.677620   34964 main.go:141] libmachine: Using API Version  1
	I0429 19:07:28.677648   34964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:28.677974   34964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:28.678196   34964 main.go:141] libmachine: (ha-058855-m04) Calling .GetState
	I0429 19:07:28.679839   34964 status.go:330] ha-058855-m04 host status = "Running" (err=<nil>)
	I0429 19:07:28.679866   34964 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:28.680182   34964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:28.680229   34964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:28.696258   34964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44079
	I0429 19:07:28.696675   34964 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:28.697263   34964 main.go:141] libmachine: Using API Version  1
	I0429 19:07:28.697291   34964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:28.697601   34964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:28.697827   34964 main.go:141] libmachine: (ha-058855-m04) Calling .GetIP
	I0429 19:07:28.700498   34964 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:28.700864   34964 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:28.700895   34964 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:28.701041   34964 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:28.701432   34964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:28.701475   34964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:28.717358   34964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35185
	I0429 19:07:28.717866   34964 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:28.718424   34964 main.go:141] libmachine: Using API Version  1
	I0429 19:07:28.718450   34964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:28.718728   34964 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:28.718914   34964 main.go:141] libmachine: (ha-058855-m04) Calling .DriverName
	I0429 19:07:28.719131   34964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:28.719158   34964 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHHostname
	I0429 19:07:28.721909   34964 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:28.722401   34964 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:28.722428   34964 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:28.722553   34964 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHPort
	I0429 19:07:28.722711   34964 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHKeyPath
	I0429 19:07:28.722860   34964 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHUsername
	I0429 19:07:28.722999   34964 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m04/id_rsa Username:docker}
	I0429 19:07:28.806448   34964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:28.824001   34964 status.go:257] ha-058855-m04 status: &{Name:ha-058855-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
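
Each run ends with one per-node struct such as &{Name:ha-058855-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured ...}, and the command keeps exiting non-zero (exit status 3) as long as any node is in that state. The sketch below shows one way such structs could be reduced to an overall pass/fail using only the fields printed in the log; the struct and the healthy helper are hypothetical illustrations, not minikube's status package.

// status_rollup.go - illustrative reduction of per-node statuses (fields as
// printed in the log) to a single healthy/unhealthy answer.
package main

import "fmt"

// NodeStatus mirrors the fields visible in the logged status structs.
type NodeStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
	Worker    bool
}

// healthy is a hypothetical helper: a worker only needs host and kubelet,
// a control plane additionally needs a running apiserver.
func healthy(n NodeStatus) bool {
	if n.Host != "Running" || n.Kubelet != "Running" {
		return false
	}
	if !n.Worker && n.APIServer != "Running" {
		return false
	}
	return true
}

func main() {
	// Values taken from the status output above.
	nodes := []NodeStatus{
		{"ha-058855", "Running", "Running", "Running", false},
		{"ha-058855-m02", "Error", "Nonexistent", "Nonexistent", false},
		{"ha-058855-m03", "Running", "Running", "Running", false},
		{"ha-058855-m04", "Running", "Running", "Irrelevant", true},
	}
	for _, n := range nodes {
		if !healthy(n) {
			fmt.Printf("%s is unhealthy; the status command would exit non-zero\n", n.Name)
		}
	}
}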
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr: exit status 3 (3.771474595s)

                                                
                                                
-- stdout --
	ha-058855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-058855-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:07:31.618641   35070 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:07:31.618910   35070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:31.618921   35070 out.go:304] Setting ErrFile to fd 2...
	I0429 19:07:31.618926   35070 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:31.619127   35070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:07:31.619300   35070 out.go:298] Setting JSON to false
	I0429 19:07:31.619323   35070 mustload.go:65] Loading cluster: ha-058855
	I0429 19:07:31.619393   35070 notify.go:220] Checking for updates...
	I0429 19:07:31.619859   35070 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:07:31.619884   35070 status.go:255] checking status of ha-058855 ...
	I0429 19:07:31.620374   35070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:31.620421   35070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:31.640738   35070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35183
	I0429 19:07:31.641226   35070 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:31.641829   35070 main.go:141] libmachine: Using API Version  1
	I0429 19:07:31.641850   35070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:31.642279   35070 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:31.642488   35070 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:07:31.644144   35070 status.go:330] ha-058855 host status = "Running" (err=<nil>)
	I0429 19:07:31.644158   35070 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:31.644440   35070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:31.644479   35070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:31.661133   35070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35075
	I0429 19:07:31.661602   35070 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:31.662160   35070 main.go:141] libmachine: Using API Version  1
	I0429 19:07:31.662184   35070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:31.662481   35070 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:31.662645   35070 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:07:31.665259   35070 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:31.665695   35070 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:31.665720   35070 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:31.665903   35070 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:31.666239   35070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:31.666296   35070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:31.682588   35070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41191
	I0429 19:07:31.682971   35070 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:31.683352   35070 main.go:141] libmachine: Using API Version  1
	I0429 19:07:31.683374   35070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:31.683640   35070 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:31.683812   35070 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:07:31.684007   35070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:31.684048   35070 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:07:31.686793   35070 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:31.687182   35070 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:31.687226   35070 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:31.687361   35070 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:07:31.687513   35070 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:07:31.687662   35070 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:07:31.687803   35070 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:07:31.774662   35070 ssh_runner.go:195] Run: systemctl --version
	I0429 19:07:31.781740   35070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:31.799120   35070 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:31.799147   35070 api_server.go:166] Checking apiserver status ...
	I0429 19:07:31.799182   35070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:31.816676   35070 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0429 19:07:31.829536   35070 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:31.829592   35070 ssh_runner.go:195] Run: ls
	I0429 19:07:31.839062   35070 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:31.843671   35070 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:31.843697   35070 status.go:422] ha-058855 apiserver status = Running (err=<nil>)
	I0429 19:07:31.843710   35070 status.go:257] ha-058855 status: &{Name:ha-058855 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:31.843728   35070 status.go:255] checking status of ha-058855-m02 ...
	I0429 19:07:31.844104   35070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:31.844147   35070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:31.859422   35070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33247
	I0429 19:07:31.859856   35070 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:31.860363   35070 main.go:141] libmachine: Using API Version  1
	I0429 19:07:31.860398   35070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:31.860728   35070 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:31.860975   35070 main.go:141] libmachine: (ha-058855-m02) Calling .GetState
	I0429 19:07:31.862655   35070 status.go:330] ha-058855-m02 host status = "Running" (err=<nil>)
	I0429 19:07:31.862672   35070 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:07:31.863098   35070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:31.863143   35070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:31.878165   35070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43023
	I0429 19:07:31.878557   35070 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:31.879026   35070 main.go:141] libmachine: Using API Version  1
	I0429 19:07:31.879048   35070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:31.879383   35070 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:31.879568   35070 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:07:31.882603   35070 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:31.883055   35070 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:07:31.883090   35070 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:31.883215   35070 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:07:31.883482   35070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:31.883517   35070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:31.898483   35070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35837
	I0429 19:07:31.898907   35070 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:31.899333   35070 main.go:141] libmachine: Using API Version  1
	I0429 19:07:31.899381   35070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:31.899799   35070 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:31.899990   35070 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:07:31.900206   35070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:31.900232   35070 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:07:31.903180   35070 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:31.903672   35070 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:07:31.903712   35070 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:31.903821   35070 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:07:31.903991   35070 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:07:31.904145   35070 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:07:31.904312   35070 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	W0429 19:07:34.954357   35070 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0429 19:07:34.954455   35070 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0429 19:07:34.954474   35070 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:34.954483   35070 status.go:257] ha-058855-m02 status: &{Name:ha-058855-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 19:07:34.954500   35070 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:34.954508   35070 status.go:255] checking status of ha-058855-m03 ...
	I0429 19:07:34.954884   35070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:34.954933   35070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:34.970773   35070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I0429 19:07:34.971200   35070 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:34.971662   35070 main.go:141] libmachine: Using API Version  1
	I0429 19:07:34.971684   35070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:34.971976   35070 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:34.972179   35070 main.go:141] libmachine: (ha-058855-m03) Calling .GetState
	I0429 19:07:34.973875   35070 status.go:330] ha-058855-m03 host status = "Running" (err=<nil>)
	I0429 19:07:34.973891   35070 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:34.974326   35070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:34.974377   35070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:34.989933   35070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38657
	I0429 19:07:34.990407   35070 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:34.990946   35070 main.go:141] libmachine: Using API Version  1
	I0429 19:07:34.990969   35070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:34.991289   35070 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:34.991487   35070 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:07:34.994041   35070 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:34.994442   35070 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:34.994476   35070 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:34.994589   35070 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:34.994920   35070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:34.994957   35070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:35.010507   35070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0429 19:07:35.010987   35070 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:35.011501   35070 main.go:141] libmachine: Using API Version  1
	I0429 19:07:35.011522   35070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:35.011883   35070 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:35.012071   35070 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:07:35.012257   35070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:35.012284   35070 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:07:35.015016   35070 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:35.015309   35070 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:35.015338   35070 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:35.015456   35070 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:07:35.015611   35070 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:07:35.015749   35070 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:07:35.015870   35070 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:07:35.103210   35070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:35.120043   35070 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:35.120069   35070 api_server.go:166] Checking apiserver status ...
	I0429 19:07:35.120110   35070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:35.141899   35070 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0429 19:07:35.152928   35070 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:35.152999   35070 ssh_runner.go:195] Run: ls
	I0429 19:07:35.158798   35070 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:35.163937   35070 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:35.163961   35070 status.go:422] ha-058855-m03 apiserver status = Running (err=<nil>)
	I0429 19:07:35.163969   35070 status.go:257] ha-058855-m03 status: &{Name:ha-058855-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:35.163986   35070 status.go:255] checking status of ha-058855-m04 ...
	I0429 19:07:35.164325   35070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:35.164361   35070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:35.180304   35070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46753
	I0429 19:07:35.180738   35070 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:35.181187   35070 main.go:141] libmachine: Using API Version  1
	I0429 19:07:35.181211   35070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:35.181561   35070 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:35.181751   35070 main.go:141] libmachine: (ha-058855-m04) Calling .GetState
	I0429 19:07:35.183414   35070 status.go:330] ha-058855-m04 host status = "Running" (err=<nil>)
	I0429 19:07:35.183431   35070 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:35.183763   35070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:35.183803   35070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:35.198877   35070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I0429 19:07:35.199339   35070 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:35.199790   35070 main.go:141] libmachine: Using API Version  1
	I0429 19:07:35.199814   35070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:35.200097   35070 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:35.200269   35070 main.go:141] libmachine: (ha-058855-m04) Calling .GetIP
	I0429 19:07:35.203126   35070 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:35.203516   35070 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:35.203537   35070 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:35.203699   35070 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:35.203973   35070 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:35.204009   35070 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:35.219864   35070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37143
	I0429 19:07:35.220372   35070 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:35.220959   35070 main.go:141] libmachine: Using API Version  1
	I0429 19:07:35.220993   35070 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:35.221353   35070 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:35.221538   35070 main.go:141] libmachine: (ha-058855-m04) Calling .DriverName
	I0429 19:07:35.221718   35070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:35.221744   35070 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHHostname
	I0429 19:07:35.224290   35070 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:35.224746   35070 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:35.224774   35070 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:35.224929   35070 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHPort
	I0429 19:07:35.225088   35070 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHKeyPath
	I0429 19:07:35.225185   35070 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHUsername
	I0429 19:07:35.225304   35070 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m04/id_rsa Username:docker}
	I0429 19:07:35.315070   35070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:35.331714   35070 status.go:257] ha-058855-m04 status: &{Name:ha-058855-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
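The stderr above is the verbose trace of one status pass: for each node the kvm2 plugin is launched, the IP is resolved from the mk-ha-058855 DHCP lease, and the node is probed over SSH. Those probes can be replayed by hand; a minimal sketch, assuming shell access from the Jenkins host and reusing the key path, user, and endpoints shown in the log (the freezer-cgroup lookup fails in the log as well, so the healthz probe is the decisive apiserver check):

	# storage and kubelet probes on the primary control plane, as logged above
	ssh -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa \
	    docker@192.168.39.52 'df -h /var | awk "NR==2{print \$5}"; sudo systemctl is-active kubelet'
	# apiserver health through the HA virtual IP; the log shows this returning 200 "ok"
	# (assumes anonymous access to /healthz, which is the upstream default)
	curl -sk https://192.168.39.254:8443/healthz

The m02 probes never get this far: the SSH dial to 192.168.39.27:22 fails with "no route to host", which is what turns that node's status into Host:Error above.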
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr: exit status 3 (3.793128766s)

                                                
                                                
-- stdout --
	ha-058855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-058855-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:07:41.702705   35267 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:07:41.702822   35267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:41.702830   35267 out.go:304] Setting ErrFile to fd 2...
	I0429 19:07:41.702835   35267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:41.703038   35267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:07:41.703205   35267 out.go:298] Setting JSON to false
	I0429 19:07:41.703229   35267 mustload.go:65] Loading cluster: ha-058855
	I0429 19:07:41.703301   35267 notify.go:220] Checking for updates...
	I0429 19:07:41.703592   35267 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:07:41.703606   35267 status.go:255] checking status of ha-058855 ...
	I0429 19:07:41.703975   35267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:41.704032   35267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:41.726615   35267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I0429 19:07:41.727162   35267 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:41.727757   35267 main.go:141] libmachine: Using API Version  1
	I0429 19:07:41.727808   35267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:41.728255   35267 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:41.728463   35267 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:07:41.730024   35267 status.go:330] ha-058855 host status = "Running" (err=<nil>)
	I0429 19:07:41.730042   35267 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:41.730340   35267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:41.730385   35267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:41.745548   35267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0429 19:07:41.745916   35267 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:41.746431   35267 main.go:141] libmachine: Using API Version  1
	I0429 19:07:41.746457   35267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:41.746745   35267 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:41.746935   35267 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:07:41.749220   35267 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:41.749639   35267 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:41.749672   35267 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:41.749768   35267 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:41.750158   35267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:41.750203   35267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:41.766682   35267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42499
	I0429 19:07:41.767087   35267 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:41.767567   35267 main.go:141] libmachine: Using API Version  1
	I0429 19:07:41.767588   35267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:41.767894   35267 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:41.768125   35267 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:07:41.768294   35267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:41.768334   35267 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:07:41.771156   35267 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:41.771628   35267 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:41.771661   35267 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:41.771799   35267 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:07:41.771985   35267 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:07:41.772169   35267 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:07:41.772328   35267 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:07:41.858677   35267 ssh_runner.go:195] Run: systemctl --version
	I0429 19:07:41.865841   35267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:41.885469   35267 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:41.885502   35267 api_server.go:166] Checking apiserver status ...
	I0429 19:07:41.885540   35267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:41.903425   35267 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0429 19:07:41.920623   35267 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:41.920695   35267 ssh_runner.go:195] Run: ls
	I0429 19:07:41.926229   35267 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:41.933637   35267 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:41.933666   35267 status.go:422] ha-058855 apiserver status = Running (err=<nil>)
	I0429 19:07:41.933678   35267 status.go:257] ha-058855 status: &{Name:ha-058855 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:41.933711   35267 status.go:255] checking status of ha-058855-m02 ...
	I0429 19:07:41.934146   35267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:41.934194   35267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:41.949791   35267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45025
	I0429 19:07:41.950292   35267 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:41.951183   35267 main.go:141] libmachine: Using API Version  1
	I0429 19:07:41.951205   35267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:41.952537   35267 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:41.952739   35267 main.go:141] libmachine: (ha-058855-m02) Calling .GetState
	I0429 19:07:41.954310   35267 status.go:330] ha-058855-m02 host status = "Running" (err=<nil>)
	I0429 19:07:41.954325   35267 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:07:41.954578   35267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:41.954610   35267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:41.971524   35267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0429 19:07:41.971919   35267 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:41.972395   35267 main.go:141] libmachine: Using API Version  1
	I0429 19:07:41.972424   35267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:41.972734   35267 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:41.972914   35267 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:07:41.976130   35267 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:41.976607   35267 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:07:41.976626   35267 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:41.976832   35267 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:07:41.977227   35267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:41.977281   35267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:41.994462   35267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45831
	I0429 19:07:41.994815   35267 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:41.995339   35267 main.go:141] libmachine: Using API Version  1
	I0429 19:07:41.995362   35267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:41.995650   35267 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:41.995896   35267 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:07:41.996112   35267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:41.996132   35267 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:07:41.999127   35267 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:41.999624   35267 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:07:41.999651   35267 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:07:41.999802   35267 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:07:41.999978   35267 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:07:42.000126   35267 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:07:42.000266   35267 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	W0429 19:07:45.066277   35267 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0429 19:07:45.066378   35267 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0429 19:07:45.066405   35267 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:45.066416   35267 status.go:257] ha-058855-m02 status: &{Name:ha-058855-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 19:07:45.066440   35267 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0429 19:07:45.066454   35267 status.go:255] checking status of ha-058855-m03 ...
	I0429 19:07:45.066761   35267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:45.066812   35267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:45.081106   35267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39147
	I0429 19:07:45.081565   35267 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:45.082031   35267 main.go:141] libmachine: Using API Version  1
	I0429 19:07:45.082055   35267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:45.082442   35267 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:45.082673   35267 main.go:141] libmachine: (ha-058855-m03) Calling .GetState
	I0429 19:07:45.084608   35267 status.go:330] ha-058855-m03 host status = "Running" (err=<nil>)
	I0429 19:07:45.084622   35267 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:45.084892   35267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:45.084927   35267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:45.101344   35267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0429 19:07:45.101790   35267 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:45.102312   35267 main.go:141] libmachine: Using API Version  1
	I0429 19:07:45.102334   35267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:45.102674   35267 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:45.102855   35267 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:07:45.106222   35267 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:45.106688   35267 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:45.106723   35267 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:45.106855   35267 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:45.107148   35267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:45.107192   35267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:45.122749   35267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I0429 19:07:45.123193   35267 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:45.123754   35267 main.go:141] libmachine: Using API Version  1
	I0429 19:07:45.123779   35267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:45.124100   35267 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:45.124294   35267 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:07:45.124469   35267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:45.124490   35267 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:07:45.127553   35267 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:45.128069   35267 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:45.128099   35267 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:45.128312   35267 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:07:45.128534   35267 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:07:45.128681   35267 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:07:45.128838   35267 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:07:45.220261   35267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:45.239480   35267 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:45.239508   35267 api_server.go:166] Checking apiserver status ...
	I0429 19:07:45.239542   35267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:45.256236   35267 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0429 19:07:45.266814   35267 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:45.266878   35267 ssh_runner.go:195] Run: ls
	I0429 19:07:45.272193   35267 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:45.276456   35267 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:45.276482   35267 status.go:422] ha-058855-m03 apiserver status = Running (err=<nil>)
	I0429 19:07:45.276508   35267 status.go:257] ha-058855-m03 status: &{Name:ha-058855-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:45.276527   35267 status.go:255] checking status of ha-058855-m04 ...
	I0429 19:07:45.276932   35267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:45.276969   35267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:45.292930   35267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43219
	I0429 19:07:45.293486   35267 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:45.293986   35267 main.go:141] libmachine: Using API Version  1
	I0429 19:07:45.294010   35267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:45.294435   35267 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:45.294655   35267 main.go:141] libmachine: (ha-058855-m04) Calling .GetState
	I0429 19:07:45.296280   35267 status.go:330] ha-058855-m04 host status = "Running" (err=<nil>)
	I0429 19:07:45.296297   35267 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:45.296656   35267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:45.296709   35267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:45.312272   35267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42075
	I0429 19:07:45.312739   35267 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:45.313282   35267 main.go:141] libmachine: Using API Version  1
	I0429 19:07:45.313302   35267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:45.313604   35267 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:45.313817   35267 main.go:141] libmachine: (ha-058855-m04) Calling .GetIP
	I0429 19:07:45.316775   35267 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:45.317249   35267 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:45.317277   35267 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:45.317471   35267 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:45.317811   35267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:45.317856   35267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:45.333935   35267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37627
	I0429 19:07:45.334416   35267 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:45.334972   35267 main.go:141] libmachine: Using API Version  1
	I0429 19:07:45.334998   35267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:45.335333   35267 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:45.335539   35267 main.go:141] libmachine: (ha-058855-m04) Calling .DriverName
	I0429 19:07:45.335776   35267 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:45.335802   35267 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHHostname
	I0429 19:07:45.338784   35267 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:45.339232   35267 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:45.339269   35267 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:45.339445   35267 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHPort
	I0429 19:07:45.339613   35267 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHKeyPath
	I0429 19:07:45.339753   35267 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHUsername
	I0429 19:07:45.339878   35267 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m04/id_rsa Username:docker}
	I0429 19:07:45.422452   35267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:45.437046   35267 status.go:257] ha-058855-m04 status: &{Name:ha-058855-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
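Both traces above fail the same way: the SSH dial to ha-058855-m02 at 192.168.39.27:22 returns "no route to host", so the node is reported as Host:Error with kubelet and apiserver Nonexistent, and the status command exits with status 3. When triaging this outside the test, the guest's state and lease can be checked straight from libvirt; a sketch, assuming virsh is available on the host and the guests live on the default system connection (domain and network names are the ones in the DBG lines):

	# is the m02 guest actually running, or already shut off?
	virsh -c qemu:///system domstate ha-058855-m02
	# current leases on the cluster network; compare with the 192.168.39.27 lease logged above
	virsh -c qemu:///system net-dhcp-leases mk-ha-058855

The next status run below reports m02 as Stopped instead (exit status 7), once the driver's GetState sees the machine as down rather than merely unreachable.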
E0429 19:07:48.916089   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr: exit status 7 (647.60634ms)

                                                
                                                
-- stdout --
	ha-058855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-058855-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:07:52.495007   35498 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:07:52.496323   35498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:52.496337   35498 out.go:304] Setting ErrFile to fd 2...
	I0429 19:07:52.496342   35498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:07:52.496639   35498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:07:52.496835   35498 out.go:298] Setting JSON to false
	I0429 19:07:52.496862   35498 mustload.go:65] Loading cluster: ha-058855
	I0429 19:07:52.497038   35498 notify.go:220] Checking for updates...
	I0429 19:07:52.497817   35498 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:07:52.497842   35498 status.go:255] checking status of ha-058855 ...
	I0429 19:07:52.498326   35498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:52.498374   35498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:52.514007   35498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37203
	I0429 19:07:52.514474   35498 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:52.515061   35498 main.go:141] libmachine: Using API Version  1
	I0429 19:07:52.515094   35498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:52.515388   35498 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:52.515597   35498 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:07:52.517308   35498 status.go:330] ha-058855 host status = "Running" (err=<nil>)
	I0429 19:07:52.517324   35498 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:52.517740   35498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:52.517802   35498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:52.532772   35498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37125
	I0429 19:07:52.533098   35498 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:52.533559   35498 main.go:141] libmachine: Using API Version  1
	I0429 19:07:52.533579   35498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:52.533852   35498 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:52.534076   35498 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:07:52.536877   35498 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:52.537254   35498 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:52.537290   35498 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:52.537491   35498 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:07:52.537762   35498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:52.537810   35498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:52.552211   35498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I0429 19:07:52.552735   35498 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:52.553274   35498 main.go:141] libmachine: Using API Version  1
	I0429 19:07:52.553302   35498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:52.553601   35498 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:52.553816   35498 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:07:52.554042   35498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:52.554081   35498 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:07:52.556945   35498 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:52.557385   35498 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:07:52.557420   35498 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:07:52.557578   35498 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:07:52.557737   35498 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:07:52.557895   35498 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:07:52.558013   35498 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:07:52.646567   35498 ssh_runner.go:195] Run: systemctl --version
	I0429 19:07:52.653231   35498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:52.671537   35498 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:52.671565   35498 api_server.go:166] Checking apiserver status ...
	I0429 19:07:52.671598   35498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:52.686563   35498 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0429 19:07:52.697403   35498 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:52.697455   35498 ssh_runner.go:195] Run: ls
	I0429 19:07:52.702346   35498 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:52.706813   35498 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:52.706831   35498 status.go:422] ha-058855 apiserver status = Running (err=<nil>)
	I0429 19:07:52.706840   35498 status.go:257] ha-058855 status: &{Name:ha-058855 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:52.706854   35498 status.go:255] checking status of ha-058855-m02 ...
	I0429 19:07:52.707133   35498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:52.707165   35498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:52.721531   35498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33455
	I0429 19:07:52.721964   35498 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:52.722486   35498 main.go:141] libmachine: Using API Version  1
	I0429 19:07:52.722507   35498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:52.722825   35498 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:52.723015   35498 main.go:141] libmachine: (ha-058855-m02) Calling .GetState
	I0429 19:07:52.724403   35498 status.go:330] ha-058855-m02 host status = "Stopped" (err=<nil>)
	I0429 19:07:52.724415   35498 status.go:343] host is not running, skipping remaining checks
	I0429 19:07:52.724421   35498 status.go:257] ha-058855-m02 status: &{Name:ha-058855-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:52.724439   35498 status.go:255] checking status of ha-058855-m03 ...
	I0429 19:07:52.724701   35498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:52.724748   35498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:52.739019   35498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36259
	I0429 19:07:52.739450   35498 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:52.739948   35498 main.go:141] libmachine: Using API Version  1
	I0429 19:07:52.739975   35498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:52.740302   35498 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:52.740492   35498 main.go:141] libmachine: (ha-058855-m03) Calling .GetState
	I0429 19:07:52.742056   35498 status.go:330] ha-058855-m03 host status = "Running" (err=<nil>)
	I0429 19:07:52.742087   35498 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:52.742355   35498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:52.742387   35498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:52.756942   35498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41651
	I0429 19:07:52.757383   35498 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:52.757821   35498 main.go:141] libmachine: Using API Version  1
	I0429 19:07:52.757841   35498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:52.758230   35498 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:52.758435   35498 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:07:52.761292   35498 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:52.761684   35498 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:52.761712   35498 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:52.762005   35498 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:07:52.762379   35498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:52.762419   35498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:52.778317   35498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I0429 19:07:52.778730   35498 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:52.779202   35498 main.go:141] libmachine: Using API Version  1
	I0429 19:07:52.779233   35498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:52.779579   35498 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:52.779768   35498 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:07:52.779983   35498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:52.780002   35498 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:07:52.783171   35498 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:52.783604   35498 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:07:52.783629   35498 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:07:52.783767   35498 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:07:52.783963   35498 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:07:52.784128   35498 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:07:52.784261   35498 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:07:52.870393   35498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:52.888600   35498 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:07:52.888625   35498 api_server.go:166] Checking apiserver status ...
	I0429 19:07:52.888653   35498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:07:52.904076   35498 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0429 19:07:52.917230   35498 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:07:52.917271   35498 ssh_runner.go:195] Run: ls
	I0429 19:07:52.923616   35498 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:07:52.928449   35498 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:07:52.928476   35498 status.go:422] ha-058855-m03 apiserver status = Running (err=<nil>)
	I0429 19:07:52.928485   35498 status.go:257] ha-058855-m03 status: &{Name:ha-058855-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:07:52.928499   35498 status.go:255] checking status of ha-058855-m04 ...
	I0429 19:07:52.928870   35498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:52.928905   35498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:52.944087   35498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42911
	I0429 19:07:52.944515   35498 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:52.944980   35498 main.go:141] libmachine: Using API Version  1
	I0429 19:07:52.945002   35498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:52.945302   35498 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:52.945492   35498 main.go:141] libmachine: (ha-058855-m04) Calling .GetState
	I0429 19:07:52.946960   35498 status.go:330] ha-058855-m04 host status = "Running" (err=<nil>)
	I0429 19:07:52.946975   35498 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:52.947333   35498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:52.947372   35498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:52.962328   35498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33089
	I0429 19:07:52.962786   35498 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:52.963262   35498 main.go:141] libmachine: Using API Version  1
	I0429 19:07:52.963299   35498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:52.963650   35498 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:52.963852   35498 main.go:141] libmachine: (ha-058855-m04) Calling .GetIP
	I0429 19:07:52.966385   35498 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:52.966847   35498 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:52.966874   35498 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:52.967019   35498 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:07:52.967332   35498 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:07:52.967366   35498 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:07:52.981265   35498 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34571
	I0429 19:07:52.981655   35498 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:07:52.982115   35498 main.go:141] libmachine: Using API Version  1
	I0429 19:07:52.982138   35498 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:07:52.982443   35498 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:07:52.982632   35498 main.go:141] libmachine: (ha-058855-m04) Calling .DriverName
	I0429 19:07:52.982811   35498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:07:52.982831   35498 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHHostname
	I0429 19:07:52.985394   35498 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:52.985820   35498 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:07:52.985844   35498 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:07:52.985973   35498 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHPort
	I0429 19:07:52.986134   35498 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHKeyPath
	I0429 19:07:52.986255   35498 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHUsername
	I0429 19:07:52.986386   35498 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m04/id_rsa Username:docker}
	I0429 19:07:53.070620   35498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:07:53.087478   35498 status.go:257] ha-058855-m04 status: &{Name:ha-058855-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr: exit status 7 (661.468779ms)

                                                
                                                
-- stdout --
	ha-058855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-058855-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:08:05.560340   35682 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:08:05.561386   35682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:08:05.561402   35682 out.go:304] Setting ErrFile to fd 2...
	I0429 19:08:05.561410   35682 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:08:05.562151   35682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:08:05.562435   35682 out.go:298] Setting JSON to false
	I0429 19:08:05.562463   35682 mustload.go:65] Loading cluster: ha-058855
	I0429 19:08:05.562509   35682 notify.go:220] Checking for updates...
	I0429 19:08:05.563002   35682 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:08:05.563025   35682 status.go:255] checking status of ha-058855 ...
	I0429 19:08:05.563508   35682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:08:05.563558   35682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:08:05.581475   35682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0429 19:08:05.581955   35682 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:08:05.582589   35682 main.go:141] libmachine: Using API Version  1
	I0429 19:08:05.582630   35682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:08:05.583060   35682 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:08:05.583271   35682 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:08:05.584825   35682 status.go:330] ha-058855 host status = "Running" (err=<nil>)
	I0429 19:08:05.584839   35682 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:08:05.585143   35682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:08:05.585179   35682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:08:05.601662   35682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35805
	I0429 19:08:05.602121   35682 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:08:05.602652   35682 main.go:141] libmachine: Using API Version  1
	I0429 19:08:05.602689   35682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:08:05.602990   35682 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:08:05.603154   35682 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:08:05.606161   35682 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:08:05.606582   35682 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:08:05.606612   35682 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:08:05.606756   35682 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:08:05.607153   35682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:08:05.607204   35682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:08:05.622959   35682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34595
	I0429 19:08:05.623405   35682 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:08:05.623871   35682 main.go:141] libmachine: Using API Version  1
	I0429 19:08:05.623892   35682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:08:05.624158   35682 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:08:05.624338   35682 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:08:05.624517   35682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:08:05.624545   35682 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:08:05.627120   35682 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:08:05.627576   35682 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:08:05.627611   35682 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:08:05.627796   35682 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:08:05.627972   35682 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:08:05.628163   35682 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:08:05.628353   35682 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:08:05.714963   35682 ssh_runner.go:195] Run: systemctl --version
	I0429 19:08:05.722272   35682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:08:05.738845   35682 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:08:05.738891   35682 api_server.go:166] Checking apiserver status ...
	I0429 19:08:05.738938   35682 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:08:05.754828   35682 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup
	W0429 19:08:05.765289   35682 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:08:05.765346   35682 ssh_runner.go:195] Run: ls
	I0429 19:08:05.771065   35682 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:08:05.778772   35682 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:08:05.778805   35682 status.go:422] ha-058855 apiserver status = Running (err=<nil>)
	I0429 19:08:05.778819   35682 status.go:257] ha-058855 status: &{Name:ha-058855 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:08:05.778851   35682 status.go:255] checking status of ha-058855-m02 ...
	I0429 19:08:05.779168   35682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:08:05.779209   35682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:08:05.795785   35682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46277
	I0429 19:08:05.796324   35682 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:08:05.796866   35682 main.go:141] libmachine: Using API Version  1
	I0429 19:08:05.796894   35682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:08:05.797190   35682 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:08:05.797358   35682 main.go:141] libmachine: (ha-058855-m02) Calling .GetState
	I0429 19:08:05.799008   35682 status.go:330] ha-058855-m02 host status = "Stopped" (err=<nil>)
	I0429 19:08:05.799035   35682 status.go:343] host is not running, skipping remaining checks
	I0429 19:08:05.799043   35682 status.go:257] ha-058855-m02 status: &{Name:ha-058855-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:08:05.799091   35682 status.go:255] checking status of ha-058855-m03 ...
	I0429 19:08:05.799479   35682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:08:05.799539   35682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:08:05.814527   35682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39207
	I0429 19:08:05.814920   35682 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:08:05.815438   35682 main.go:141] libmachine: Using API Version  1
	I0429 19:08:05.815470   35682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:08:05.815823   35682 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:08:05.816032   35682 main.go:141] libmachine: (ha-058855-m03) Calling .GetState
	I0429 19:08:05.817406   35682 status.go:330] ha-058855-m03 host status = "Running" (err=<nil>)
	I0429 19:08:05.817421   35682 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:08:05.817766   35682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:08:05.817803   35682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:08:05.832335   35682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0429 19:08:05.832702   35682 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:08:05.833201   35682 main.go:141] libmachine: Using API Version  1
	I0429 19:08:05.833216   35682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:08:05.833592   35682 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:08:05.833848   35682 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:08:05.837024   35682 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:08:05.837506   35682 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:08:05.837531   35682 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:08:05.837646   35682 host.go:66] Checking if "ha-058855-m03" exists ...
	I0429 19:08:05.837938   35682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:08:05.837979   35682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:08:05.852983   35682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36581
	I0429 19:08:05.853467   35682 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:08:05.853924   35682 main.go:141] libmachine: Using API Version  1
	I0429 19:08:05.853945   35682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:08:05.854269   35682 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:08:05.854451   35682 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:08:05.854628   35682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:08:05.854651   35682 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:08:05.857311   35682 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:08:05.857761   35682 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:08:05.857791   35682 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:08:05.857881   35682 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:08:05.858082   35682 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:08:05.858210   35682 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:08:05.858491   35682 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:08:05.943636   35682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:08:05.961120   35682 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:08:05.961150   35682 api_server.go:166] Checking apiserver status ...
	I0429 19:08:05.961182   35682 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:08:05.978611   35682 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup
	W0429 19:08:05.993066   35682 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1578/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:08:05.993132   35682 ssh_runner.go:195] Run: ls
	I0429 19:08:05.999760   35682 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:08:06.004449   35682 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:08:06.004473   35682 status.go:422] ha-058855-m03 apiserver status = Running (err=<nil>)
	I0429 19:08:06.004482   35682 status.go:257] ha-058855-m03 status: &{Name:ha-058855-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:08:06.004498   35682 status.go:255] checking status of ha-058855-m04 ...
	I0429 19:08:06.004878   35682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:08:06.004915   35682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:08:06.020017   35682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0429 19:08:06.020412   35682 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:08:06.020888   35682 main.go:141] libmachine: Using API Version  1
	I0429 19:08:06.020924   35682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:08:06.021226   35682 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:08:06.021439   35682 main.go:141] libmachine: (ha-058855-m04) Calling .GetState
	I0429 19:08:06.022929   35682 status.go:330] ha-058855-m04 host status = "Running" (err=<nil>)
	I0429 19:08:06.022954   35682 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:08:06.023268   35682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:08:06.023309   35682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:08:06.039387   35682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0429 19:08:06.039780   35682 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:08:06.040286   35682 main.go:141] libmachine: Using API Version  1
	I0429 19:08:06.040308   35682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:08:06.040608   35682 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:08:06.040857   35682 main.go:141] libmachine: (ha-058855-m04) Calling .GetIP
	I0429 19:08:06.043743   35682 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:08:06.044135   35682 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:08:06.044154   35682 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:08:06.044332   35682 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:08:06.044732   35682 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:08:06.044792   35682 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:08:06.059718   35682 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I0429 19:08:06.060125   35682 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:08:06.060594   35682 main.go:141] libmachine: Using API Version  1
	I0429 19:08:06.060619   35682 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:08:06.060899   35682 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:08:06.061065   35682 main.go:141] libmachine: (ha-058855-m04) Calling .DriverName
	I0429 19:08:06.061217   35682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:08:06.061248   35682 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHHostname
	I0429 19:08:06.064266   35682 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:08:06.064746   35682 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:08:06.064784   35682 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:08:06.064985   35682 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHPort
	I0429 19:08:06.065191   35682 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHKeyPath
	I0429 19:08:06.065345   35682 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHUsername
	I0429 19:08:06.065482   35682 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m04/id_rsa Username:docker}
	I0429 19:08:06.150590   35682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:08:06.166753   35682 status.go:257] ha-058855-m04 status: &{Name:ha-058855-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr" : exit status 7
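For context, the assertion above only inspects the process exit code of `out/minikube-linux-amd64 status`; the non-zero code here coincides with ha-058855-m02 being reported as Stopped in the stdout block. The following is a minimal, hypothetical Go sketch of that check pattern (it is not the actual ha_test.go code, and the helper name is invented), shown only to illustrate how an exit status such as 7 would be surfaced:

	// statusSketch runs "minikube status" for a profile and reports its exit code.
	// This is an illustrative sketch, not the test harness implementation.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-058855", "status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Non-zero exit: at least one node was not fully Running/Configured.
			fmt.Printf("status exited with code %d\n", exitErr.ExitCode())
		}
	}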
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-058855 -n ha-058855
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-058855 logs -n 25: (1.617598262s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855:/home/docker/cp-test_ha-058855-m03_ha-058855.txt                       |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855 sudo cat                                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m03_ha-058855.txt                                 |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m02:/home/docker/cp-test_ha-058855-m03_ha-058855-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m02 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m03_ha-058855-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04:/home/docker/cp-test_ha-058855-m03_ha-058855-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m04 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m03_ha-058855-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-058855 cp testdata/cp-test.txt                                                | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1826286980/001/cp-test_ha-058855-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855:/home/docker/cp-test_ha-058855-m04_ha-058855.txt                       |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855 sudo cat                                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m04_ha-058855.txt                                 |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m02:/home/docker/cp-test_ha-058855-m04_ha-058855-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m02 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m04_ha-058855-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03:/home/docker/cp-test_ha-058855-m04_ha-058855-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m03 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m04_ha-058855-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-058855 node stop m02 -v=7                                                     | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-058855 node start m02 -v=7                                                    | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 18:58:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 18:58:45.981713   26778 out.go:291] Setting OutFile to fd 1 ...
	I0429 18:58:45.982017   26778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:58:45.982030   26778 out.go:304] Setting ErrFile to fd 2...
	I0429 18:58:45.982037   26778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:58:45.982269   26778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 18:58:45.982917   26778 out.go:298] Setting JSON to false
	I0429 18:58:45.983844   26778 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2424,"bootTime":1714414702,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 18:58:45.983913   26778 start.go:139] virtualization: kvm guest
	I0429 18:58:45.986353   26778 out.go:177] * [ha-058855] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 18:58:45.988095   26778 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 18:58:45.988015   26778 notify.go:220] Checking for updates...
	I0429 18:58:45.991450   26778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 18:58:45.992910   26778 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:58:45.994268   26778 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:58:45.995790   26778 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 18:58:45.997240   26778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 18:58:46.005382   26778 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 18:58:46.041163   26778 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 18:58:46.042692   26778 start.go:297] selected driver: kvm2
	I0429 18:58:46.042705   26778 start.go:901] validating driver "kvm2" against <nil>
	I0429 18:58:46.042718   26778 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 18:58:46.043374   26778 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:58:46.043450   26778 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 18:58:46.058631   26778 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 18:58:46.058717   26778 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 18:58:46.059010   26778 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 18:58:46.059085   26778 cni.go:84] Creating CNI manager for ""
	I0429 18:58:46.059101   26778 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 18:58:46.059106   26778 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 18:58:46.059194   26778 start.go:340] cluster config:
	{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:58:46.059344   26778 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:58:46.062290   26778 out.go:177] * Starting "ha-058855" primary control-plane node in "ha-058855" cluster
	I0429 18:58:46.063881   26778 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 18:58:46.063918   26778 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 18:58:46.063925   26778 cache.go:56] Caching tarball of preloaded images
	I0429 18:58:46.064026   26778 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 18:58:46.064036   26778 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 18:58:46.064344   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 18:58:46.064366   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json: {Name:mk48010ce9611f8eba62bb08b5dc0da5b3034370 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:58:46.064489   26778 start.go:360] acquireMachinesLock for ha-058855: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 18:58:46.064516   26778 start.go:364] duration metric: took 14.602µs to acquireMachinesLock for "ha-058855"
	I0429 18:58:46.064533   26778 start.go:93] Provisioning new machine with config: &{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 18:58:46.064590   26778 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 18:58:46.066349   26778 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 18:58:46.066478   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:58:46.066510   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:58:46.080288   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0429 18:58:46.080776   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:58:46.081375   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:58:46.081401   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:58:46.081731   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:58:46.081953   26778 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 18:58:46.082148   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:58:46.082320   26778 start.go:159] libmachine.API.Create for "ha-058855" (driver="kvm2")
	I0429 18:58:46.082360   26778 client.go:168] LocalClient.Create starting
	I0429 18:58:46.082398   26778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 18:58:46.082441   26778 main.go:141] libmachine: Decoding PEM data...
	I0429 18:58:46.082461   26778 main.go:141] libmachine: Parsing certificate...
	I0429 18:58:46.082546   26778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 18:58:46.082578   26778 main.go:141] libmachine: Decoding PEM data...
	I0429 18:58:46.082603   26778 main.go:141] libmachine: Parsing certificate...
	I0429 18:58:46.082635   26778 main.go:141] libmachine: Running pre-create checks...
	I0429 18:58:46.082648   26778 main.go:141] libmachine: (ha-058855) Calling .PreCreateCheck
	I0429 18:58:46.082977   26778 main.go:141] libmachine: (ha-058855) Calling .GetConfigRaw
	I0429 18:58:46.083418   26778 main.go:141] libmachine: Creating machine...
	I0429 18:58:46.083438   26778 main.go:141] libmachine: (ha-058855) Calling .Create
	I0429 18:58:46.083581   26778 main.go:141] libmachine: (ha-058855) Creating KVM machine...
	I0429 18:58:46.084823   26778 main.go:141] libmachine: (ha-058855) DBG | found existing default KVM network
	I0429 18:58:46.085443   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:46.085290   26801 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1e0}
	I0429 18:58:46.085462   26778 main.go:141] libmachine: (ha-058855) DBG | created network xml: 
	I0429 18:58:46.085476   26778 main.go:141] libmachine: (ha-058855) DBG | <network>
	I0429 18:58:46.085484   26778 main.go:141] libmachine: (ha-058855) DBG |   <name>mk-ha-058855</name>
	I0429 18:58:46.085493   26778 main.go:141] libmachine: (ha-058855) DBG |   <dns enable='no'/>
	I0429 18:58:46.085607   26778 main.go:141] libmachine: (ha-058855) DBG |   
	I0429 18:58:46.085628   26778 main.go:141] libmachine: (ha-058855) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0429 18:58:46.085637   26778 main.go:141] libmachine: (ha-058855) DBG |     <dhcp>
	I0429 18:58:46.085645   26778 main.go:141] libmachine: (ha-058855) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0429 18:58:46.085653   26778 main.go:141] libmachine: (ha-058855) DBG |     </dhcp>
	I0429 18:58:46.085660   26778 main.go:141] libmachine: (ha-058855) DBG |   </ip>
	I0429 18:58:46.085668   26778 main.go:141] libmachine: (ha-058855) DBG |   
	I0429 18:58:46.085674   26778 main.go:141] libmachine: (ha-058855) DBG | </network>
	I0429 18:58:46.085681   26778 main.go:141] libmachine: (ha-058855) DBG | 
	I0429 18:58:46.090762   26778 main.go:141] libmachine: (ha-058855) DBG | trying to create private KVM network mk-ha-058855 192.168.39.0/24...
	I0429 18:58:46.156910   26778 main.go:141] libmachine: (ha-058855) DBG | private KVM network mk-ha-058855 192.168.39.0/24 created
	I0429 18:58:46.156949   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:46.156890   26801 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:58:46.156961   26778 main.go:141] libmachine: (ha-058855) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855 ...
	I0429 18:58:46.156988   26778 main.go:141] libmachine: (ha-058855) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 18:58:46.157020   26778 main.go:141] libmachine: (ha-058855) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 18:58:46.384628   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:46.384497   26801 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa...
	I0429 18:58:46.506043   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:46.505915   26801 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/ha-058855.rawdisk...
	I0429 18:58:46.506095   26778 main.go:141] libmachine: (ha-058855) DBG | Writing magic tar header
	I0429 18:58:46.506117   26778 main.go:141] libmachine: (ha-058855) DBG | Writing SSH key tar header
	I0429 18:58:46.506128   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:46.506029   26801 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855 ...
	I0429 18:58:46.506190   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855
	I0429 18:58:46.506224   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 18:58:46.506241   26778 main.go:141] libmachine: (ha-058855) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855 (perms=drwx------)
	I0429 18:58:46.506254   26778 main.go:141] libmachine: (ha-058855) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 18:58:46.506260   26778 main.go:141] libmachine: (ha-058855) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 18:58:46.506267   26778 main.go:141] libmachine: (ha-058855) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 18:58:46.506274   26778 main.go:141] libmachine: (ha-058855) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 18:58:46.506283   26778 main.go:141] libmachine: (ha-058855) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 18:58:46.506294   26778 main.go:141] libmachine: (ha-058855) Creating domain...
	I0429 18:58:46.506309   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:58:46.506324   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 18:58:46.506335   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 18:58:46.506346   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home/jenkins
	I0429 18:58:46.506353   26778 main.go:141] libmachine: (ha-058855) DBG | Checking permissions on dir: /home
	I0429 18:58:46.506364   26778 main.go:141] libmachine: (ha-058855) DBG | Skipping /home - not owner
	I0429 18:58:46.507454   26778 main.go:141] libmachine: (ha-058855) define libvirt domain using xml: 
	I0429 18:58:46.507486   26778 main.go:141] libmachine: (ha-058855) <domain type='kvm'>
	I0429 18:58:46.507498   26778 main.go:141] libmachine: (ha-058855)   <name>ha-058855</name>
	I0429 18:58:46.507513   26778 main.go:141] libmachine: (ha-058855)   <memory unit='MiB'>2200</memory>
	I0429 18:58:46.507528   26778 main.go:141] libmachine: (ha-058855)   <vcpu>2</vcpu>
	I0429 18:58:46.507538   26778 main.go:141] libmachine: (ha-058855)   <features>
	I0429 18:58:46.507545   26778 main.go:141] libmachine: (ha-058855)     <acpi/>
	I0429 18:58:46.507550   26778 main.go:141] libmachine: (ha-058855)     <apic/>
	I0429 18:58:46.507557   26778 main.go:141] libmachine: (ha-058855)     <pae/>
	I0429 18:58:46.507565   26778 main.go:141] libmachine: (ha-058855)     
	I0429 18:58:46.507574   26778 main.go:141] libmachine: (ha-058855)   </features>
	I0429 18:58:46.507584   26778 main.go:141] libmachine: (ha-058855)   <cpu mode='host-passthrough'>
	I0429 18:58:46.507605   26778 main.go:141] libmachine: (ha-058855)   
	I0429 18:58:46.507620   26778 main.go:141] libmachine: (ha-058855)   </cpu>
	I0429 18:58:46.507636   26778 main.go:141] libmachine: (ha-058855)   <os>
	I0429 18:58:46.507648   26778 main.go:141] libmachine: (ha-058855)     <type>hvm</type>
	I0429 18:58:46.507657   26778 main.go:141] libmachine: (ha-058855)     <boot dev='cdrom'/>
	I0429 18:58:46.507673   26778 main.go:141] libmachine: (ha-058855)     <boot dev='hd'/>
	I0429 18:58:46.507681   26778 main.go:141] libmachine: (ha-058855)     <bootmenu enable='no'/>
	I0429 18:58:46.507685   26778 main.go:141] libmachine: (ha-058855)   </os>
	I0429 18:58:46.507694   26778 main.go:141] libmachine: (ha-058855)   <devices>
	I0429 18:58:46.507699   26778 main.go:141] libmachine: (ha-058855)     <disk type='file' device='cdrom'>
	I0429 18:58:46.507709   26778 main.go:141] libmachine: (ha-058855)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/boot2docker.iso'/>
	I0429 18:58:46.507715   26778 main.go:141] libmachine: (ha-058855)       <target dev='hdc' bus='scsi'/>
	I0429 18:58:46.507720   26778 main.go:141] libmachine: (ha-058855)       <readonly/>
	I0429 18:58:46.507725   26778 main.go:141] libmachine: (ha-058855)     </disk>
	I0429 18:58:46.507733   26778 main.go:141] libmachine: (ha-058855)     <disk type='file' device='disk'>
	I0429 18:58:46.507752   26778 main.go:141] libmachine: (ha-058855)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 18:58:46.507763   26778 main.go:141] libmachine: (ha-058855)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/ha-058855.rawdisk'/>
	I0429 18:58:46.507768   26778 main.go:141] libmachine: (ha-058855)       <target dev='hda' bus='virtio'/>
	I0429 18:58:46.507777   26778 main.go:141] libmachine: (ha-058855)     </disk>
	I0429 18:58:46.507781   26778 main.go:141] libmachine: (ha-058855)     <interface type='network'>
	I0429 18:58:46.507787   26778 main.go:141] libmachine: (ha-058855)       <source network='mk-ha-058855'/>
	I0429 18:58:46.507792   26778 main.go:141] libmachine: (ha-058855)       <model type='virtio'/>
	I0429 18:58:46.507797   26778 main.go:141] libmachine: (ha-058855)     </interface>
	I0429 18:58:46.507804   26778 main.go:141] libmachine: (ha-058855)     <interface type='network'>
	I0429 18:58:46.507816   26778 main.go:141] libmachine: (ha-058855)       <source network='default'/>
	I0429 18:58:46.507825   26778 main.go:141] libmachine: (ha-058855)       <model type='virtio'/>
	I0429 18:58:46.507839   26778 main.go:141] libmachine: (ha-058855)     </interface>
	I0429 18:58:46.507856   26778 main.go:141] libmachine: (ha-058855)     <serial type='pty'>
	I0429 18:58:46.507869   26778 main.go:141] libmachine: (ha-058855)       <target port='0'/>
	I0429 18:58:46.507875   26778 main.go:141] libmachine: (ha-058855)     </serial>
	I0429 18:58:46.507880   26778 main.go:141] libmachine: (ha-058855)     <console type='pty'>
	I0429 18:58:46.507888   26778 main.go:141] libmachine: (ha-058855)       <target type='serial' port='0'/>
	I0429 18:58:46.507893   26778 main.go:141] libmachine: (ha-058855)     </console>
	I0429 18:58:46.507900   26778 main.go:141] libmachine: (ha-058855)     <rng model='virtio'>
	I0429 18:58:46.507907   26778 main.go:141] libmachine: (ha-058855)       <backend model='random'>/dev/random</backend>
	I0429 18:58:46.507914   26778 main.go:141] libmachine: (ha-058855)     </rng>
	I0429 18:58:46.507922   26778 main.go:141] libmachine: (ha-058855)     
	I0429 18:58:46.507932   26778 main.go:141] libmachine: (ha-058855)     
	I0429 18:58:46.507950   26778 main.go:141] libmachine: (ha-058855)   </devices>
	I0429 18:58:46.507963   26778 main.go:141] libmachine: (ha-058855) </domain>
	I0429 18:58:46.507972   26778 main.go:141] libmachine: (ha-058855) 
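The block above is the complete libvirt domain XML the kvm2 driver defines: 2200 MiB of RAM, 2 vCPUs, the boot2docker ISO attached as a CD-ROM, a raw disk, and two virtio NICs on the mk-ha-058855 and default networks. A minimal way to inspect the same definition by hand with virsh (the qemu:///system URI matches the KVMQemuURI shown later in the cluster config):

    # Dump the XML libvirt stored for the freshly defined domain
    virsh --connect qemu:///system dumpxml ha-058855

    # List its network interfaces and their MAC addresses
    virsh --connect qemu:///system domiflist ha-058855

    # Confirm both attached networks (default and mk-ha-058855) are active
    virsh --connect qemu:///system net-list --all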
	I0429 18:58:46.512516   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:30:77:6b in network default
	I0429 18:58:46.513053   26778 main.go:141] libmachine: (ha-058855) Ensuring networks are active...
	I0429 18:58:46.513097   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:46.513811   26778 main.go:141] libmachine: (ha-058855) Ensuring network default is active
	I0429 18:58:46.514219   26778 main.go:141] libmachine: (ha-058855) Ensuring network mk-ha-058855 is active
	I0429 18:58:46.514729   26778 main.go:141] libmachine: (ha-058855) Getting domain xml...
	I0429 18:58:46.515445   26778 main.go:141] libmachine: (ha-058855) Creating domain...
	I0429 18:58:47.715436   26778 main.go:141] libmachine: (ha-058855) Waiting to get IP...
	I0429 18:58:47.716319   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:47.716824   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:47.716864   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:47.716812   26801 retry.go:31] will retry after 294.883019ms: waiting for machine to come up
	I0429 18:58:48.013525   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:48.013974   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:48.014007   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:48.013934   26801 retry.go:31] will retry after 307.387741ms: waiting for machine to come up
	I0429 18:58:48.323461   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:48.323911   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:48.323934   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:48.323850   26801 retry.go:31] will retry after 334.207259ms: waiting for machine to come up
	I0429 18:58:48.659277   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:48.659684   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:48.659708   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:48.659648   26801 retry.go:31] will retry after 571.775593ms: waiting for machine to come up
	I0429 18:58:49.234694   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:49.235194   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:49.235221   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:49.235135   26801 retry.go:31] will retry after 502.125919ms: waiting for machine to come up
	I0429 18:58:49.738943   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:49.739428   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:49.739453   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:49.739378   26801 retry.go:31] will retry after 813.308401ms: waiting for machine to come up
	I0429 18:58:50.554246   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:50.554670   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:50.554703   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:50.554619   26801 retry.go:31] will retry after 1.177820988s: waiting for machine to come up
	I0429 18:58:51.734420   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:51.734872   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:51.734902   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:51.734817   26801 retry.go:31] will retry after 1.480258642s: waiting for machine to come up
	I0429 18:58:53.217397   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:53.217886   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:53.217905   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:53.217838   26801 retry.go:31] will retry after 1.797890934s: waiting for machine to come up
	I0429 18:58:55.018030   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:55.018466   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:55.018495   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:55.018423   26801 retry.go:31] will retry after 1.659555309s: waiting for machine to come up
	I0429 18:58:56.679239   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:56.679663   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:56.679693   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:56.679609   26801 retry.go:31] will retry after 2.631753998s: waiting for machine to come up
	I0429 18:58:59.314308   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:58:59.314778   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:58:59.314801   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:58:59.314737   26801 retry.go:31] will retry after 2.503386337s: waiting for machine to come up
	I0429 18:59:01.820186   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:01.820581   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:59:01.820608   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:59:01.820544   26801 retry.go:31] will retry after 4.232745054s: waiting for machine to come up
	I0429 18:59:06.057826   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:06.058177   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find current IP address of domain ha-058855 in network mk-ha-058855
	I0429 18:59:06.058199   26778 main.go:141] libmachine: (ha-058855) DBG | I0429 18:59:06.058134   26801 retry.go:31] will retry after 4.272974766s: waiting for machine to come up
	I0429 18:59:10.335751   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.336226   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has current primary IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.336241   26778 main.go:141] libmachine: (ha-058855) Found IP for machine: 192.168.39.52
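The retry loop above simply polls libvirt, backing off between attempts, until a DHCP lease for the guest's MAC shows up in network mk-ha-058855. The same lease can be watched by hand with standard virsh subcommands:

    # DHCP leases handed out on the cluster network; the MAC 52:54:00:bf:0c:a5
    # eventually appears with its assigned address (192.168.39.52 in this run)
    virsh --connect qemu:///system net-dhcp-leases mk-ha-058855

    # Or ask for the addresses libvirt knows for the domain itself
    virsh --connect qemu:///system domifaddr ha-058855 --source lease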
	I0429 18:59:10.336254   26778 main.go:141] libmachine: (ha-058855) Reserving static IP address...
	I0429 18:59:10.336605   26778 main.go:141] libmachine: (ha-058855) DBG | unable to find host DHCP lease matching {name: "ha-058855", mac: "52:54:00:bf:0c:a5", ip: "192.168.39.52"} in network mk-ha-058855
	I0429 18:59:10.407735   26778 main.go:141] libmachine: (ha-058855) DBG | Getting to WaitForSSH function...
	I0429 18:59:10.407762   26778 main.go:141] libmachine: (ha-058855) Reserved static IP address: 192.168.39.52
	I0429 18:59:10.407775   26778 main.go:141] libmachine: (ha-058855) Waiting for SSH to be available...
	I0429 18:59:10.409898   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.410305   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:10.410335   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.410451   26778 main.go:141] libmachine: (ha-058855) DBG | Using SSH client type: external
	I0429 18:59:10.410480   26778 main.go:141] libmachine: (ha-058855) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa (-rw-------)
	I0429 18:59:10.410512   26778 main.go:141] libmachine: (ha-058855) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.52 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 18:59:10.410523   26778 main.go:141] libmachine: (ha-058855) DBG | About to run SSH command:
	I0429 18:59:10.410550   26778 main.go:141] libmachine: (ha-058855) DBG | exit 0
	I0429 18:59:10.538010   26778 main.go:141] libmachine: (ha-058855) DBG | SSH cmd err, output: <nil>: 
	I0429 18:59:10.538317   26778 main.go:141] libmachine: (ha-058855) KVM machine creation complete!
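WaitForSSH above just repeats the external ssh invocation printed at 18:59:10.410512 until `exit 0` succeeds. A sketch of the equivalent manual probe, reusing the key and options from that log line:

    ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa \
      docker@192.168.39.52 'exit 0' && echo "SSH is up"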
	I0429 18:59:10.538640   26778 main.go:141] libmachine: (ha-058855) Calling .GetConfigRaw
	I0429 18:59:10.539113   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:10.539325   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:10.539469   26778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 18:59:10.539487   26778 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 18:59:10.540716   26778 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 18:59:10.540733   26778 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 18:59:10.540741   26778 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 18:59:10.540748   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:10.542802   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.543156   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:10.543178   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.543291   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:10.543460   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.543599   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.543743   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:10.543893   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 18:59:10.544113   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 18:59:10.544125   26778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 18:59:10.653739   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 18:59:10.653771   26778 main.go:141] libmachine: Detecting the provisioner...
	I0429 18:59:10.653784   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:10.656716   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.657192   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:10.657220   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.657378   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:10.657611   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.657816   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.657959   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:10.658145   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 18:59:10.658304   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 18:59:10.658314   26778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 18:59:10.771272   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 18:59:10.771353   26778 main.go:141] libmachine: found compatible host: buildroot
	I0429 18:59:10.771372   26778 main.go:141] libmachine: Provisioning with buildroot...
	I0429 18:59:10.771382   26778 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 18:59:10.771603   26778 buildroot.go:166] provisioning hostname "ha-058855"
	I0429 18:59:10.771625   26778 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 18:59:10.771832   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:10.774384   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.774652   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:10.774680   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.774825   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:10.774998   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.775152   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.775291   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:10.775441   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 18:59:10.775622   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 18:59:10.775644   26778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-058855 && echo "ha-058855" | sudo tee /etc/hostname
	I0429 18:59:10.907073   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-058855
	
	I0429 18:59:10.907102   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:10.909812   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.910149   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:10.910175   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:10.910338   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:10.910522   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.910657   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:10.910756   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:10.910877   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 18:59:10.911068   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 18:59:10.911087   26778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-058855' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-058855/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-058855' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 18:59:11.033157   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
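The heredoc above sets the hostname and patches /etc/hosts so that ha-058855 resolves to 127.0.1.1. A quick confirmation inside the guest:

    hostname                                     # should print ha-058855
    grep -n 'ha-058855' /etc/hostname /etc/hosts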
	I0429 18:59:11.033184   26778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 18:59:11.033210   26778 buildroot.go:174] setting up certificates
	I0429 18:59:11.033224   26778 provision.go:84] configureAuth start
	I0429 18:59:11.033238   26778 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 18:59:11.033492   26778 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 18:59:11.035787   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.036077   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.036105   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.036231   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.037934   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.038280   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.038310   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.038437   26778 provision.go:143] copyHostCerts
	I0429 18:59:11.038468   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 18:59:11.038501   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 18:59:11.038510   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 18:59:11.038577   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 18:59:11.038671   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 18:59:11.038688   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 18:59:11.038695   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 18:59:11.038732   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 18:59:11.038776   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 18:59:11.038792   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 18:59:11.038799   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 18:59:11.038818   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 18:59:11.038863   26778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.ha-058855 san=[127.0.0.1 192.168.39.52 ha-058855 localhost minikube]
	I0429 18:59:11.182794   26778 provision.go:177] copyRemoteCerts
	I0429 18:59:11.182851   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 18:59:11.182875   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.185284   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.185569   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.185598   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.185753   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:11.185951   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.186242   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:11.186394   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 18:59:11.273680   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 18:59:11.273764   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 18:59:11.299852   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 18:59:11.299907   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0429 18:59:11.325706   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 18:59:11.325772   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 18:59:11.351636   26778 provision.go:87] duration metric: took 318.397502ms to configureAuth
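configureAuth above generates a server certificate with the SAN list shown at 18:59:11.038863 (127.0.0.1, 192.168.39.52, ha-058855, localhost, minikube) and copies it, its key and the CA to /etc/docker on the guest. A hedged spot check with plain openssl, run on the guest:

    # Subject and SAN list of the server certificate that was just copied over
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName

    # Verify it chains to the CA that was copied alongside it
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem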
	I0429 18:59:11.351665   26778 buildroot.go:189] setting minikube options for container-runtime
	I0429 18:59:11.351840   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:59:11.351913   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.354032   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.354302   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.354337   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.354455   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:11.354642   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.354845   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.354990   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:11.355156   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 18:59:11.355310   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 18:59:11.355326   26778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 18:59:11.637312   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 18:59:11.637335   26778 main.go:141] libmachine: Checking connection to Docker...
	I0429 18:59:11.637343   26778 main.go:141] libmachine: (ha-058855) Calling .GetURL
	I0429 18:59:11.638553   26778 main.go:141] libmachine: (ha-058855) DBG | Using libvirt version 6000000
	I0429 18:59:11.640422   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.640675   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.640702   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.640873   26778 main.go:141] libmachine: Docker is up and running!
	I0429 18:59:11.640889   26778 main.go:141] libmachine: Reticulating splines...
	I0429 18:59:11.640895   26778 client.go:171] duration metric: took 25.558524436s to LocalClient.Create
	I0429 18:59:11.640918   26778 start.go:167] duration metric: took 25.558599994s to libmachine.API.Create "ha-058855"
	I0429 18:59:11.640933   26778 start.go:293] postStartSetup for "ha-058855" (driver="kvm2")
	I0429 18:59:11.640945   26778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 18:59:11.640960   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:11.641191   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 18:59:11.641212   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.643096   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.643389   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.643411   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.643515   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:11.643725   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.643870   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:11.644003   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 18:59:11.729083   26778 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 18:59:11.733711   26778 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 18:59:11.733734   26778 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 18:59:11.733784   26778 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 18:59:11.733870   26778 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 18:59:11.733881   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /etc/ssl/certs/151242.pem
	I0429 18:59:11.733969   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 18:59:11.743613   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 18:59:11.770152   26778 start.go:296] duration metric: took 129.204352ms for postStartSetup
	I0429 18:59:11.770203   26778 main.go:141] libmachine: (ha-058855) Calling .GetConfigRaw
	I0429 18:59:11.770756   26778 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 18:59:11.773181   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.773512   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.773541   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.773756   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 18:59:11.773945   26778 start.go:128] duration metric: took 25.709346707s to createHost
	I0429 18:59:11.773976   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.776279   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.776624   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.776654   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.776800   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:11.776996   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.777146   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.777278   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:11.777432   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 18:59:11.777587   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 18:59:11.777601   26778 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 18:59:11.891562   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714417151.879654551
	
	I0429 18:59:11.891593   26778 fix.go:216] guest clock: 1714417151.879654551
	I0429 18:59:11.891602   26778 fix.go:229] Guest: 2024-04-29 18:59:11.879654551 +0000 UTC Remote: 2024-04-29 18:59:11.773965638 +0000 UTC m=+25.839178511 (delta=105.688913ms)
	I0429 18:59:11.891648   26778 fix.go:200] guest clock delta is within tolerance: 105.688913ms
	I0429 18:59:11.891653   26778 start.go:83] releasing machines lock for "ha-058855", held for 25.827128697s
	I0429 18:59:11.891674   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:11.891975   26778 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 18:59:11.894291   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.894604   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.894631   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.894744   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:11.895325   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:11.895490   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:11.895573   26778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 18:59:11.895615   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.895723   26778 ssh_runner.go:195] Run: cat /version.json
	I0429 18:59:11.895749   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:11.898017   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.898261   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.898293   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.898312   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.898441   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:11.898618   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.898660   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:11.898694   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:11.898795   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:11.898850   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:11.898933   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 18:59:11.899005   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:11.899114   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:11.899217   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 18:59:11.980019   26778 ssh_runner.go:195] Run: systemctl --version
	I0429 18:59:12.005681   26778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 18:59:12.171140   26778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 18:59:12.177944   26778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 18:59:12.178009   26778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 18:59:12.197532   26778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 18:59:12.197559   26778 start.go:494] detecting cgroup driver to use...
	I0429 18:59:12.197626   26778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 18:59:12.215950   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 18:59:12.230970   26778 docker.go:217] disabling cri-docker service (if available) ...
	I0429 18:59:12.231018   26778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 18:59:12.245693   26778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 18:59:12.259626   26778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 18:59:12.384318   26778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 18:59:12.537847   26778 docker.go:233] disabling docker service ...
	I0429 18:59:12.537927   26778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 18:59:12.553895   26778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 18:59:12.568500   26778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 18:59:12.700131   26778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 18:59:12.839476   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
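The block from 18:59:12.230 onwards makes sure only CRI-O will own the container runtime: cri-docker and docker are stopped, disabled and masked before crio itself is configured. If a run stalls here, the unit states can be checked directly:

    # docker should be inactive/masked; crio becomes active after the restart further down
    systemctl is-active docker crio
    systemctl list-unit-files --no-pager 'docker*' 'cri-docker*'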
	I0429 18:59:12.855048   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 18:59:12.875486   26778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 18:59:12.875565   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.886836   26778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 18:59:12.886899   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.898135   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.908886   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.920104   26778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 18:59:12.931187   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.942089   26778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.961928   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 18:59:12.974299   26778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 18:59:12.985323   26778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 18:59:12.985366   26778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 18:59:12.999894   26778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 18:59:13.011289   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:59:13.150511   26778 ssh_runner.go:195] Run: sudo systemctl restart crio
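The sed edits between 18:59:12.875 and 18:59:12.962 all rewrite the same drop-in, /etc/crio/crio.conf.d/02-crio.conf, before crio is restarted. Roughly what that file should contain afterwards, inferred from those commands rather than captured from this run:

    # Inspect the keys the provisioner just rewrote
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]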
	I0429 18:59:13.304012   26778 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 18:59:13.304087   26778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 18:59:13.309763   26778 start.go:562] Will wait 60s for crictl version
	I0429 18:59:13.309832   26778 ssh_runner.go:195] Run: which crictl
	I0429 18:59:13.314458   26778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 18:59:13.357508   26778 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 18:59:13.357611   26778 ssh_runner.go:195] Run: crio --version
	I0429 18:59:13.390289   26778 ssh_runner.go:195] Run: crio --version
	I0429 18:59:13.424211   26778 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 18:59:13.425715   26778 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 18:59:13.428241   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:13.428590   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:13.428621   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:13.428841   26778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 18:59:13.433495   26778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 18:59:13.447818   26778 kubeadm.go:877] updating cluster {Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 18:59:13.447940   26778 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 18:59:13.447983   26778 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 18:59:13.483877   26778 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 18:59:13.483944   26778 ssh_runner.go:195] Run: which lz4
	I0429 18:59:13.488489   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 18:59:13.488585   26778 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 18:59:13.493494   26778 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 18:59:13.493532   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 18:59:15.200881   26778 crio.go:462] duration metric: took 1.712326187s to copy over tarball
	I0429 18:59:15.200951   26778 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 18:59:17.696525   26778 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.49555051s)
	I0429 18:59:17.696555   26778 crio.go:469] duration metric: took 2.495646439s to extract the tarball
	I0429 18:59:17.696562   26778 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 18:59:17.736827   26778 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 18:59:17.786117   26778 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 18:59:17.786142   26778 cache_images.go:84] Images are preloaded, skipping loading
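Once the preload tarball has been unpacked into /var, the second `sudo crictl images --output json` finds every required image and image loading is skipped. A hedged spot check for the same condition (jq is assumed to be available; it is not necessarily present on the Buildroot guest):

    # Image tags known to CRI-O; the v1.30.0 kube components should all be present
    sudo crictl images --output json | jq -r '.images[].repoTags[]' | grep 'registry.k8s.io/kube-'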
	I0429 18:59:17.786151   26778 kubeadm.go:928] updating node { 192.168.39.52 8443 v1.30.0 crio true true} ...
	I0429 18:59:17.786291   26778 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-058855 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
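The [Unit]/[Service] fragment above is written as a systemd override for the kubelet unit, replacing ExecStart with the node IP and hostname override. On the node, the merged unit including that override can be displayed with:

    systemctl cat kubelet --no-pager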
	I0429 18:59:17.786379   26778 ssh_runner.go:195] Run: crio config
	I0429 18:59:17.844413   26778 cni.go:84] Creating CNI manager for ""
	I0429 18:59:17.844436   26778 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 18:59:17.844448   26778 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 18:59:17.844466   26778 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.52 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-058855 NodeName:ha-058855 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 18:59:17.844603   26778 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-058855"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
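Note: the four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml a few lines below. A minimal sketch for sanity-checking such a file by hand inside the VM, assuming the v1.30.0 kubeadm binary is on PATH and that this build includes the "config validate" subcommand:

  # compare the generated kubelet settings against upstream defaults
  kubeadm config print init-defaults --component-configs KubeletConfiguration
  # validate the rendered file before "kubeadm init" consumes it
  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml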
	
	I0429 18:59:17.844627   26778 kube-vip.go:115] generating kube-vip config ...
	I0429 18:59:17.844665   26778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 18:59:17.865139   26778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 18:59:17.865253   26778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
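Note: with cp_enable and vip_leaderelection set as above, only the kube-vip leader binds the 192.168.39.254 VIP on eth0 and answers ARP for it. An illustrative check of which node currently holds it (interface, VIP and lease name all taken from the manifest above):

  ip -4 addr show dev eth0 | grep 192.168.39.254
  kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'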
	I0429 18:59:17.865324   26778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 18:59:17.876875   26778 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 18:59:17.876940   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 18:59:17.887859   26778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 18:59:17.907865   26778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 18:59:17.927443   26778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0429 18:59:17.946838   26778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0429 18:59:17.965580   26778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 18:59:17.970566   26778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
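Note: the one-liner above keeps the /etc/hosts update idempotent: any existing control-plane.minikube.internal entry is filtered out before the fresh record is appended, so repeated runs never duplicate it. The same pattern with placeholder NAME and IP values looks like:

  { grep -v $'\tNAME$' /etc/hosts; printf 'IP\tNAME\n'; } > /tmp/hosts.$$
  sudo cp /tmp/hosts.$$ /etc/hosts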
	I0429 18:59:17.985377   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 18:59:18.107795   26778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 18:59:18.126577   26778 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855 for IP: 192.168.39.52
	I0429 18:59:18.126602   26778 certs.go:194] generating shared ca certs ...
	I0429 18:59:18.126623   26778 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.126802   26778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 18:59:18.126863   26778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 18:59:18.126877   26778 certs.go:256] generating profile certs ...
	I0429 18:59:18.126972   26778 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key
	I0429 18:59:18.126992   26778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.crt with IP's: []
	I0429 18:59:18.338614   26778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.crt ...
	I0429 18:59:18.338646   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.crt: {Name:mk2faac6a398f89a4d1a9a126033354d7bde59ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.338808   26778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key ...
	I0429 18:59:18.338819   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key: {Name:mk8227aad5a8167db33cc520c292f679014a0ac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.338891   26778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.c5afc2ae
	I0429 18:59:18.338906   26778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.c5afc2ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.52 192.168.39.254]
	I0429 18:59:18.439619   26778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.c5afc2ae ...
	I0429 18:59:18.439652   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.c5afc2ae: {Name:mk221dd4b271f1fdbc86793831f6fbf5460f8563 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.439803   26778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.c5afc2ae ...
	I0429 18:59:18.439816   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.c5afc2ae: {Name:mkbb96d6ff3ce7f1d2a0cef765d216fc115a5b2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.439889   26778 certs.go:381] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.c5afc2ae -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt
	I0429 18:59:18.439978   26778 certs.go:385] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.c5afc2ae -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key
	I0429 18:59:18.440043   26778 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key
	I0429 18:59:18.440060   26778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt with IP's: []
	I0429 18:59:18.703344   26778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt ...
	I0429 18:59:18.703376   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt: {Name:mkbac1bb5ff240a8f048a4dd619a346b31d7eb7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.703534   26778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key ...
	I0429 18:59:18.703546   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key: {Name:mk0ac8bd499ced3b4ca1180a4958b246d94e3c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:18.703614   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 18:59:18.703632   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 18:59:18.703642   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 18:59:18.703658   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 18:59:18.703671   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 18:59:18.703689   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 18:59:18.703702   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 18:59:18.703713   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 18:59:18.703768   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 18:59:18.703801   26778 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 18:59:18.703811   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 18:59:18.703840   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 18:59:18.703864   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 18:59:18.703890   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 18:59:18.703924   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 18:59:18.703951   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:59:18.703965   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem -> /usr/share/ca-certificates/15124.pem
	I0429 18:59:18.703976   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /usr/share/ca-certificates/151242.pem
	I0429 18:59:18.704540   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 18:59:18.738263   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 18:59:18.770530   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 18:59:18.800850   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 18:59:18.832435   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 18:59:18.864115   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 18:59:18.893193   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 18:59:18.923614   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 18:59:18.965251   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 18:59:18.996332   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 18:59:19.027163   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 18:59:19.054628   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 18:59:19.074995   26778 ssh_runner.go:195] Run: openssl version
	I0429 18:59:19.081512   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 18:59:19.093922   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:59:19.099812   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:59:19.099870   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 18:59:19.107286   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 18:59:19.120482   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 18:59:19.133291   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 18:59:19.140245   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 18:59:19.140301   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 18:59:19.147128   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 18:59:19.159843   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 18:59:19.172499   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 18:59:19.177841   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 18:59:19.177894   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 18:59:19.185051   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
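Note: the hex link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes, which is why each certificate gets its own symlink under /etc/ssl/certs. Recomputing one by hand is a two-liner (a sketch, using the minikube CA copied earlier):

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"

This hashed-directory layout is what OpenSSL-based clients scan when verifying against a CApath.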
	I0429 18:59:19.197472   26778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 18:59:19.202888   26778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 18:59:19.202948   26778 kubeadm.go:391] StartCluster: {Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:59:19.203039   26778 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 18:59:19.203081   26778 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 18:59:19.244733   26778 cri.go:89] found id: ""
	I0429 18:59:19.244820   26778 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 18:59:19.256733   26778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 18:59:19.268768   26778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 18:59:19.280826   26778 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 18:59:19.280846   26778 kubeadm.go:156] found existing configuration files:
	
	I0429 18:59:19.280900   26778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 18:59:19.292679   26778 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 18:59:19.292743   26778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 18:59:19.304280   26778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 18:59:19.315309   26778 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 18:59:19.315361   26778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 18:59:19.326958   26778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 18:59:19.338190   26778 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 18:59:19.338249   26778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 18:59:19.350650   26778 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 18:59:19.361659   26778 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 18:59:19.361746   26778 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 18:59:19.372592   26778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 18:59:19.618465   26778 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 18:59:30.028037   26778 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 18:59:30.028108   26778 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 18:59:30.028199   26778 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 18:59:30.028318   26778 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 18:59:30.028407   26778 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0429 18:59:30.028486   26778 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 18:59:30.030108   26778 out.go:204]   - Generating certificates and keys ...
	I0429 18:59:30.030197   26778 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 18:59:30.030273   26778 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 18:59:30.030370   26778 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 18:59:30.030453   26778 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 18:59:30.030545   26778 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 18:59:30.030607   26778 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 18:59:30.030668   26778 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 18:59:30.030831   26778 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-058855 localhost] and IPs [192.168.39.52 127.0.0.1 ::1]
	I0429 18:59:30.030876   26778 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 18:59:30.030985   26778 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-058855 localhost] and IPs [192.168.39.52 127.0.0.1 ::1]
	I0429 18:59:30.031049   26778 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 18:59:30.031102   26778 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 18:59:30.031141   26778 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 18:59:30.031191   26778 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 18:59:30.031241   26778 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 18:59:30.031288   26778 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 18:59:30.031352   26778 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 18:59:30.031422   26778 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 18:59:30.031480   26778 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 18:59:30.031567   26778 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 18:59:30.031623   26778 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 18:59:30.033419   26778 out.go:204]   - Booting up control plane ...
	I0429 18:59:30.033509   26778 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 18:59:30.033594   26778 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 18:59:30.033675   26778 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 18:59:30.033813   26778 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 18:59:30.033931   26778 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 18:59:30.033984   26778 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 18:59:30.034182   26778 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 18:59:30.034277   26778 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 18:59:30.034376   26778 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.301863ms
	I0429 18:59:30.034492   26778 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 18:59:30.034588   26778 kubeadm.go:309] [api-check] The API server is healthy after 5.911240016s
	I0429 18:59:30.034723   26778 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 18:59:30.034857   26778 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 18:59:30.034933   26778 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 18:59:30.035125   26778 kubeadm.go:309] [mark-control-plane] Marking the node ha-058855 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 18:59:30.035184   26778 kubeadm.go:309] [bootstrap-token] Using token: 87ht6r.s99wm15bpluoriwx
	I0429 18:59:30.036692   26778 out.go:204]   - Configuring RBAC rules ...
	I0429 18:59:30.036773   26778 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 18:59:30.036885   26778 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 18:59:30.037056   26778 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 18:59:30.037226   26778 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 18:59:30.037399   26778 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 18:59:30.037490   26778 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 18:59:30.037651   26778 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 18:59:30.037708   26778 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 18:59:30.037782   26778 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 18:59:30.037794   26778 kubeadm.go:309] 
	I0429 18:59:30.037849   26778 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 18:59:30.037856   26778 kubeadm.go:309] 
	I0429 18:59:30.037942   26778 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 18:59:30.037953   26778 kubeadm.go:309] 
	I0429 18:59:30.038007   26778 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 18:59:30.038087   26778 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 18:59:30.038177   26778 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 18:59:30.038193   26778 kubeadm.go:309] 
	I0429 18:59:30.038265   26778 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 18:59:30.038275   26778 kubeadm.go:309] 
	I0429 18:59:30.038349   26778 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 18:59:30.038360   26778 kubeadm.go:309] 
	I0429 18:59:30.038447   26778 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 18:59:30.038513   26778 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 18:59:30.038610   26778 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 18:59:30.038620   26778 kubeadm.go:309] 
	I0429 18:59:30.038743   26778 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 18:59:30.038817   26778 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 18:59:30.038824   26778 kubeadm.go:309] 
	I0429 18:59:30.038900   26778 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 87ht6r.s99wm15bpluoriwx \
	I0429 18:59:30.038992   26778 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 18:59:30.039012   26778 kubeadm.go:309] 	--control-plane 
	I0429 18:59:30.039018   26778 kubeadm.go:309] 
	I0429 18:59:30.039089   26778 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 18:59:30.039096   26778 kubeadm.go:309] 
	I0429 18:59:30.039161   26778 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 87ht6r.s99wm15bpluoriwx \
	I0429 18:59:30.039266   26778 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
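Note: the sha256 value in the join commands above is a hash of the cluster CA's public key, not of the certificate file itself. It can be recomputed on the control-plane with the usual kubeadm recipe (a sketch; the CA sits under minikube's cert dir rather than /etc/kubernetes/pki here):

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'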
	I0429 18:59:30.039281   26778 cni.go:84] Creating CNI manager for ""
	I0429 18:59:30.039291   26778 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 18:59:30.040841   26778 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 18:59:30.042158   26778 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 18:59:30.048525   26778 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 18:59:30.048539   26778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 18:59:30.070126   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 18:59:30.430317   26778 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 18:59:30.430412   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:30.430416   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-058855 minikube.k8s.io/updated_at=2024_04_29T18_59_30_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-058855 minikube.k8s.io/primary=true
	I0429 18:59:30.459757   26778 ops.go:34] apiserver oom_adj: -16
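Note: the label pass above stamps the node with minikube.k8s.io/primary=true plus version and commit metadata; a quick, illustrative way to confirm the labels landed:

  kubectl get nodes -l minikube.k8s.io/primary=true --show-labels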
	I0429 18:59:30.608077   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:31.108108   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:31.608142   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:32.108659   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:32.608230   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:33.108096   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:33.608472   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:34.108870   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:34.608987   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:35.108341   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:35.608834   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:36.108993   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:36.608073   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:37.108300   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:37.608224   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:38.108782   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:38.608925   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:39.108264   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:39.608942   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:40.108969   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:40.609022   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:41.108330   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:41.609130   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:42.108697   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:42.608132   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 18:59:42.734083   26778 kubeadm.go:1107] duration metric: took 12.303712997s to wait for elevateKubeSystemPrivileges
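Note: the burst of identical "kubectl get sa default" runs above is a fixed-interval readiness poll: the command is retried roughly every 500ms until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges wait measures. The equivalent shell loop, as a sketch:

  until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done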
	W0429 18:59:42.734123   26778 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 18:59:42.734130   26778 kubeadm.go:393] duration metric: took 23.531186894s to StartCluster
	I0429 18:59:42.734151   26778 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:42.734237   26778 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:59:42.735028   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:59:42.735272   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 18:59:42.735283   26778 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 18:59:42.735309   26778 start.go:240] waiting for startup goroutines ...
	I0429 18:59:42.735325   26778 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 18:59:42.735401   26778 addons.go:69] Setting storage-provisioner=true in profile "ha-058855"
	I0429 18:59:42.735414   26778 addons.go:69] Setting default-storageclass=true in profile "ha-058855"
	I0429 18:59:42.735429   26778 addons.go:234] Setting addon storage-provisioner=true in "ha-058855"
	I0429 18:59:42.735455   26778 host.go:66] Checking if "ha-058855" exists ...
	I0429 18:59:42.735457   26778 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-058855"
	I0429 18:59:42.735833   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:59:42.735868   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:59:42.735953   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:59:42.736148   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:59:42.736196   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:59:42.751199   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40963
	I0429 18:59:42.751292   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36175
	I0429 18:59:42.751664   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:59:42.751666   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:59:42.752185   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:59:42.752208   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:59:42.752213   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:59:42.752224   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:59:42.752551   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:59:42.752599   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:59:42.752731   26778 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 18:59:42.753179   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:59:42.753226   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:59:42.755004   26778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:59:42.755344   26778 kapi.go:59] client config for ha-058855: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.crt", KeyFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key", CAFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 18:59:42.755988   26778 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 18:59:42.756170   26778 addons.go:234] Setting addon default-storageclass=true in "ha-058855"
	I0429 18:59:42.756212   26778 host.go:66] Checking if "ha-058855" exists ...
	I0429 18:59:42.756592   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:59:42.756656   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:59:42.769101   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I0429 18:59:42.769648   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:59:42.770223   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:59:42.770247   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:59:42.770593   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:59:42.770769   26778 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 18:59:42.772656   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:42.772805   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I0429 18:59:42.774706   26778 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 18:59:42.773155   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:59:42.776139   26778 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 18:59:42.776157   26778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 18:59:42.776178   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:42.776643   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:59:42.776669   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:59:42.777037   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:59:42.777574   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:59:42.777606   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:59:42.779575   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:42.780083   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:42.780107   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:42.780285   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:42.780540   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:42.780750   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:42.780939   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 18:59:42.792857   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42357
	I0429 18:59:42.793344   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:59:42.793830   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:59:42.793856   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:59:42.794190   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:59:42.794376   26778 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 18:59:42.795993   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 18:59:42.796275   26778 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 18:59:42.796289   26778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 18:59:42.796309   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 18:59:42.799193   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:42.799566   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 18:59:42.799593   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 18:59:42.799734   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 18:59:42.799905   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 18:59:42.800036   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 18:59:42.800140   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 18:59:42.875740   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 18:59:42.951882   26778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 18:59:42.961400   26778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 18:59:43.179595   26778 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
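Note: the sed pipeline above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.39.1) from inside the cluster; the injected stanza comes out roughly as:

  hosts {
     192.168.39.1 host.minikube.internal
     fallthrough
  }

The fallthrough line lets every other query continue on to the normal forward plugin.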
	I0429 18:59:43.231394   26778 main.go:141] libmachine: Making call to close driver server
	I0429 18:59:43.231417   26778 main.go:141] libmachine: (ha-058855) Calling .Close
	I0429 18:59:43.231721   26778 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:59:43.231738   26778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:59:43.231746   26778 main.go:141] libmachine: Making call to close driver server
	I0429 18:59:43.231753   26778 main.go:141] libmachine: (ha-058855) Calling .Close
	I0429 18:59:43.232018   26778 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:59:43.232044   26778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:59:43.232050   26778 main.go:141] libmachine: (ha-058855) DBG | Closing plugin on server side
	I0429 18:59:43.232186   26778 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 18:59:43.232197   26778 round_trippers.go:469] Request Headers:
	I0429 18:59:43.232207   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 18:59:43.232215   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 18:59:43.246447   26778 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0429 18:59:43.247025   26778 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 18:59:43.247040   26778 round_trippers.go:469] Request Headers:
	I0429 18:59:43.247048   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 18:59:43.247055   26778 round_trippers.go:473]     Content-Type: application/json
	I0429 18:59:43.247058   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 18:59:43.249672   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 18:59:43.249819   26778 main.go:141] libmachine: Making call to close driver server
	I0429 18:59:43.249833   26778 main.go:141] libmachine: (ha-058855) Calling .Close
	I0429 18:59:43.250168   26778 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:59:43.250186   26778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:59:43.250220   26778 main.go:141] libmachine: (ha-058855) DBG | Closing plugin on server side
	I0429 18:59:43.426209   26778 main.go:141] libmachine: Making call to close driver server
	I0429 18:59:43.426230   26778 main.go:141] libmachine: (ha-058855) Calling .Close
	I0429 18:59:43.426534   26778 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:59:43.426551   26778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:59:43.426559   26778 main.go:141] libmachine: Making call to close driver server
	I0429 18:59:43.426568   26778 main.go:141] libmachine: (ha-058855) Calling .Close
	I0429 18:59:43.426792   26778 main.go:141] libmachine: Successfully made call to close driver server
	I0429 18:59:43.426805   26778 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 18:59:43.426824   26778 main.go:141] libmachine: (ha-058855) DBG | Closing plugin on server side
	I0429 18:59:43.429848   26778 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0429 18:59:43.431302   26778 addons.go:505] duration metric: took 695.978638ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0429 18:59:43.431352   26778 start.go:245] waiting for cluster config update ...
	I0429 18:59:43.431367   26778 start.go:254] writing updated cluster config ...
	I0429 18:59:43.433377   26778 out.go:177] 
	I0429 18:59:43.434775   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:59:43.434879   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 18:59:43.436380   26778 out.go:177] * Starting "ha-058855-m02" control-plane node in "ha-058855" cluster
	I0429 18:59:43.437851   26778 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 18:59:43.437882   26778 cache.go:56] Caching tarball of preloaded images
	I0429 18:59:43.438004   26778 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 18:59:43.438021   26778 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 18:59:43.438126   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 18:59:43.438342   26778 start.go:360] acquireMachinesLock for ha-058855-m02: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 18:59:43.438404   26778 start.go:364] duration metric: took 34.364µs to acquireMachinesLock for "ha-058855-m02"
	I0429 18:59:43.438429   26778 start.go:93] Provisioning new machine with config: &{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 18:59:43.438544   26778 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0429 18:59:43.440136   26778 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 18:59:43.440239   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:59:43.440278   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:59:43.454725   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41979
	I0429 18:59:43.455136   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:59:43.455597   26778 main.go:141] libmachine: Using API Version  1
	I0429 18:59:43.455618   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:59:43.455999   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:59:43.456230   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetMachineName
	I0429 18:59:43.456447   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 18:59:43.456633   26778 start.go:159] libmachine.API.Create for "ha-058855" (driver="kvm2")
	I0429 18:59:43.456651   26778 client.go:168] LocalClient.Create starting
	I0429 18:59:43.456749   26778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 18:59:43.456810   26778 main.go:141] libmachine: Decoding PEM data...
	I0429 18:59:43.456831   26778 main.go:141] libmachine: Parsing certificate...
	I0429 18:59:43.456899   26778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 18:59:43.456925   26778 main.go:141] libmachine: Decoding PEM data...
	I0429 18:59:43.456940   26778 main.go:141] libmachine: Parsing certificate...
	I0429 18:59:43.456980   26778 main.go:141] libmachine: Running pre-create checks...
	I0429 18:59:43.456990   26778 main.go:141] libmachine: (ha-058855-m02) Calling .PreCreateCheck
	I0429 18:59:43.457176   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetConfigRaw
	I0429 18:59:43.457573   26778 main.go:141] libmachine: Creating machine...
	I0429 18:59:43.457586   26778 main.go:141] libmachine: (ha-058855-m02) Calling .Create
	I0429 18:59:43.457724   26778 main.go:141] libmachine: (ha-058855-m02) Creating KVM machine...
	I0429 18:59:43.459160   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found existing default KVM network
	I0429 18:59:43.459330   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found existing private KVM network mk-ha-058855
	I0429 18:59:43.459488   26778 main.go:141] libmachine: (ha-058855-m02) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02 ...
	I0429 18:59:43.459509   26778 main.go:141] libmachine: (ha-058855-m02) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 18:59:43.459561   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:43.459456   27419 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:59:43.459654   26778 main.go:141] libmachine: (ha-058855-m02) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 18:59:43.678395   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:43.678266   27419 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa...
	I0429 18:59:43.975573   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:43.975423   27419 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/ha-058855-m02.rawdisk...
	I0429 18:59:43.975604   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Writing magic tar header
	I0429 18:59:43.975619   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Writing SSH key tar header
	I0429 18:59:43.975636   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:43.975546   27419 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02 ...
	I0429 18:59:43.975654   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02
	I0429 18:59:43.975688   26778 main.go:141] libmachine: (ha-058855-m02) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02 (perms=drwx------)
	I0429 18:59:43.975709   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 18:59:43.975725   26778 main.go:141] libmachine: (ha-058855-m02) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 18:59:43.975740   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:59:43.975773   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 18:59:43.975788   26778 main.go:141] libmachine: (ha-058855-m02) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 18:59:43.975808   26778 main.go:141] libmachine: (ha-058855-m02) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 18:59:43.975821   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 18:59:43.975836   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home/jenkins
	I0429 18:59:43.975847   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Checking permissions on dir: /home
	I0429 18:59:43.975858   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Skipping /home - not owner
	I0429 18:59:43.975874   26778 main.go:141] libmachine: (ha-058855-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 18:59:43.975893   26778 main.go:141] libmachine: (ha-058855-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 18:59:43.975907   26778 main.go:141] libmachine: (ha-058855-m02) Creating domain...
	I0429 18:59:43.976661   26778 main.go:141] libmachine: (ha-058855-m02) define libvirt domain using xml: 
	I0429 18:59:43.976680   26778 main.go:141] libmachine: (ha-058855-m02) <domain type='kvm'>
	I0429 18:59:43.976687   26778 main.go:141] libmachine: (ha-058855-m02)   <name>ha-058855-m02</name>
	I0429 18:59:43.976692   26778 main.go:141] libmachine: (ha-058855-m02)   <memory unit='MiB'>2200</memory>
	I0429 18:59:43.976698   26778 main.go:141] libmachine: (ha-058855-m02)   <vcpu>2</vcpu>
	I0429 18:59:43.976705   26778 main.go:141] libmachine: (ha-058855-m02)   <features>
	I0429 18:59:43.976711   26778 main.go:141] libmachine: (ha-058855-m02)     <acpi/>
	I0429 18:59:43.976715   26778 main.go:141] libmachine: (ha-058855-m02)     <apic/>
	I0429 18:59:43.976723   26778 main.go:141] libmachine: (ha-058855-m02)     <pae/>
	I0429 18:59:43.976744   26778 main.go:141] libmachine: (ha-058855-m02)     
	I0429 18:59:43.976756   26778 main.go:141] libmachine: (ha-058855-m02)   </features>
	I0429 18:59:43.976762   26778 main.go:141] libmachine: (ha-058855-m02)   <cpu mode='host-passthrough'>
	I0429 18:59:43.976769   26778 main.go:141] libmachine: (ha-058855-m02)   
	I0429 18:59:43.976779   26778 main.go:141] libmachine: (ha-058855-m02)   </cpu>
	I0429 18:59:43.976787   26778 main.go:141] libmachine: (ha-058855-m02)   <os>
	I0429 18:59:43.976791   26778 main.go:141] libmachine: (ha-058855-m02)     <type>hvm</type>
	I0429 18:59:43.976796   26778 main.go:141] libmachine: (ha-058855-m02)     <boot dev='cdrom'/>
	I0429 18:59:43.976801   26778 main.go:141] libmachine: (ha-058855-m02)     <boot dev='hd'/>
	I0429 18:59:43.976808   26778 main.go:141] libmachine: (ha-058855-m02)     <bootmenu enable='no'/>
	I0429 18:59:43.976819   26778 main.go:141] libmachine: (ha-058855-m02)   </os>
	I0429 18:59:43.976843   26778 main.go:141] libmachine: (ha-058855-m02)   <devices>
	I0429 18:59:43.976859   26778 main.go:141] libmachine: (ha-058855-m02)     <disk type='file' device='cdrom'>
	I0429 18:59:43.976870   26778 main.go:141] libmachine: (ha-058855-m02)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/boot2docker.iso'/>
	I0429 18:59:43.976887   26778 main.go:141] libmachine: (ha-058855-m02)       <target dev='hdc' bus='scsi'/>
	I0429 18:59:43.976896   26778 main.go:141] libmachine: (ha-058855-m02)       <readonly/>
	I0429 18:59:43.976904   26778 main.go:141] libmachine: (ha-058855-m02)     </disk>
	I0429 18:59:43.976931   26778 main.go:141] libmachine: (ha-058855-m02)     <disk type='file' device='disk'>
	I0429 18:59:43.976965   26778 main.go:141] libmachine: (ha-058855-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 18:59:43.976982   26778 main.go:141] libmachine: (ha-058855-m02)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/ha-058855-m02.rawdisk'/>
	I0429 18:59:43.976992   26778 main.go:141] libmachine: (ha-058855-m02)       <target dev='hda' bus='virtio'/>
	I0429 18:59:43.977002   26778 main.go:141] libmachine: (ha-058855-m02)     </disk>
	I0429 18:59:43.977010   26778 main.go:141] libmachine: (ha-058855-m02)     <interface type='network'>
	I0429 18:59:43.977022   26778 main.go:141] libmachine: (ha-058855-m02)       <source network='mk-ha-058855'/>
	I0429 18:59:43.977031   26778 main.go:141] libmachine: (ha-058855-m02)       <model type='virtio'/>
	I0429 18:59:43.977036   26778 main.go:141] libmachine: (ha-058855-m02)     </interface>
	I0429 18:59:43.977047   26778 main.go:141] libmachine: (ha-058855-m02)     <interface type='network'>
	I0429 18:59:43.977061   26778 main.go:141] libmachine: (ha-058855-m02)       <source network='default'/>
	I0429 18:59:43.977077   26778 main.go:141] libmachine: (ha-058855-m02)       <model type='virtio'/>
	I0429 18:59:43.977090   26778 main.go:141] libmachine: (ha-058855-m02)     </interface>
	I0429 18:59:43.977101   26778 main.go:141] libmachine: (ha-058855-m02)     <serial type='pty'>
	I0429 18:59:43.977110   26778 main.go:141] libmachine: (ha-058855-m02)       <target port='0'/>
	I0429 18:59:43.977120   26778 main.go:141] libmachine: (ha-058855-m02)     </serial>
	I0429 18:59:43.977129   26778 main.go:141] libmachine: (ha-058855-m02)     <console type='pty'>
	I0429 18:59:43.977144   26778 main.go:141] libmachine: (ha-058855-m02)       <target type='serial' port='0'/>
	I0429 18:59:43.977159   26778 main.go:141] libmachine: (ha-058855-m02)     </console>
	I0429 18:59:43.977171   26778 main.go:141] libmachine: (ha-058855-m02)     <rng model='virtio'>
	I0429 18:59:43.977190   26778 main.go:141] libmachine: (ha-058855-m02)       <backend model='random'>/dev/random</backend>
	I0429 18:59:43.977200   26778 main.go:141] libmachine: (ha-058855-m02)     </rng>
	I0429 18:59:43.977208   26778 main.go:141] libmachine: (ha-058855-m02)     
	I0429 18:59:43.977216   26778 main.go:141] libmachine: (ha-058855-m02)     
	I0429 18:59:43.977222   26778 main.go:141] libmachine: (ha-058855-m02)   </devices>
	I0429 18:59:43.977232   26778 main.go:141] libmachine: (ha-058855-m02) </domain>
	I0429 18:59:43.977239   26778 main.go:141] libmachine: (ha-058855-m02) 
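
The XML above is the libvirt domain the kvm2 driver defines for the new ha-058855-m02 VM: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (one on the private mk-ha-058855 network, one on libvirt's default network). As a rough, hedged sketch (not minikube's implementation), an equivalent define-and-start step could be driven from Go by shelling out to virsh, assuming the XML had been written to a hypothetical ha-058855-m02.xml file:

// Hedged sketch: define and start a libvirt domain from an XML file by
// shelling out to virsh. It mirrors what the log above reports, but it is
// not the kvm2 driver's actual code.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func defineAndStart(xmlPath, domainName string) error {
	// "virsh define" registers the domain from its XML description.
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	// "virsh start" boots the freshly defined domain.
	if out, err := exec.Command("virsh", "start", domainName).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical file name; the domain name matches the log for illustration.
	if err := defineAndStart("ha-058855-m02.xml", "ha-058855-m02"); err != nil {
		log.Fatal(err)
	}
}
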
	I0429 18:59:43.983852   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:40:82:e8 in network default
	I0429 18:59:43.984371   26778 main.go:141] libmachine: (ha-058855-m02) Ensuring networks are active...
	I0429 18:59:43.984389   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:43.985119   26778 main.go:141] libmachine: (ha-058855-m02) Ensuring network default is active
	I0429 18:59:43.985436   26778 main.go:141] libmachine: (ha-058855-m02) Ensuring network mk-ha-058855 is active
	I0429 18:59:43.985884   26778 main.go:141] libmachine: (ha-058855-m02) Getting domain xml...
	I0429 18:59:43.986602   26778 main.go:141] libmachine: (ha-058855-m02) Creating domain...
	I0429 18:59:45.231264   26778 main.go:141] libmachine: (ha-058855-m02) Waiting to get IP...
	I0429 18:59:45.232027   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:45.232439   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:45.232479   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:45.232435   27419 retry.go:31] will retry after 288.019954ms: waiting for machine to come up
	I0429 18:59:45.522141   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:45.522695   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:45.522720   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:45.522669   27419 retry.go:31] will retry after 341.352877ms: waiting for machine to come up
	I0429 18:59:45.865224   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:45.865742   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:45.865772   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:45.865704   27419 retry.go:31] will retry after 428.945282ms: waiting for machine to come up
	I0429 18:59:46.296241   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:46.296599   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:46.296619   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:46.296581   27419 retry.go:31] will retry after 543.34325ms: waiting for machine to come up
	I0429 18:59:46.841376   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:46.841802   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:46.841829   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:46.841759   27419 retry.go:31] will retry after 762.276747ms: waiting for machine to come up
	I0429 18:59:47.605680   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:47.606106   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:47.606134   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:47.606050   27419 retry.go:31] will retry after 718.412828ms: waiting for machine to come up
	I0429 18:59:48.325846   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:48.326280   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:48.326310   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:48.326230   27419 retry.go:31] will retry after 882.907083ms: waiting for machine to come up
	I0429 18:59:49.210629   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:49.211042   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:49.211065   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:49.211010   27419 retry.go:31] will retry after 1.274425388s: waiting for machine to come up
	I0429 18:59:50.487472   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:50.487829   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:50.487859   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:50.487785   27419 retry.go:31] will retry after 1.613104504s: waiting for machine to come up
	I0429 18:59:52.103213   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:52.103586   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:52.103617   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:52.103571   27419 retry.go:31] will retry after 2.032138772s: waiting for machine to come up
	I0429 18:59:54.137486   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:54.137918   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:54.137946   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:54.137874   27419 retry.go:31] will retry after 2.860217313s: waiting for machine to come up
	I0429 18:59:57.000098   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 18:59:57.000554   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 18:59:57.000591   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 18:59:57.000478   27419 retry.go:31] will retry after 3.364383116s: waiting for machine to come up
	I0429 19:00:00.366964   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:00.367359   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 19:00:00.367385   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 19:00:00.367324   27419 retry.go:31] will retry after 3.364915441s: waiting for machine to come up
	I0429 19:00:03.733964   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:03.734448   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find current IP address of domain ha-058855-m02 in network mk-ha-058855
	I0429 19:00:03.734474   26778 main.go:141] libmachine: (ha-058855-m02) DBG | I0429 19:00:03.734425   27419 retry.go:31] will retry after 4.96010853s: waiting for machine to come up
	I0429 19:00:08.695586   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:08.696062   26778 main.go:141] libmachine: (ha-058855-m02) Found IP for machine: 192.168.39.27
	I0429 19:00:08.696093   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has current primary IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:08.696101   26778 main.go:141] libmachine: (ha-058855-m02) Reserving static IP address...
	I0429 19:00:08.696665   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find host DHCP lease matching {name: "ha-058855-m02", mac: "52:54:00:98:81:20", ip: "192.168.39.27"} in network mk-ha-058855
	I0429 19:00:08.770639   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Getting to WaitForSSH function...
	I0429 19:00:08.770668   26778 main.go:141] libmachine: (ha-058855-m02) Reserved static IP address: 192.168.39.27
	I0429 19:00:08.770680   26778 main.go:141] libmachine: (ha-058855-m02) Waiting for SSH to be available...
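
The repeated "will retry after ..." lines above show the driver polling libvirt's DHCP leases with a growing delay until the domain's IP (192.168.39.27) appears; the same pattern is used again below while waiting for SSH. A minimal sketch of such a grow-the-delay poll loop, written against a generic condition function rather than minikube's actual retry helper, could look like:

// Hedged sketch of a retry loop with an increasing delay, in the spirit of
// the "will retry after ..." log lines above. Not minikube's retry.go.
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls fn until it succeeds or the timeout elapses, sleeping a
// little longer after each failed attempt.
func waitFor(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay between attempts
	}
}

func main() {
	calls := 0
	_ = waitFor(func() error {
		calls++
		if calls < 4 {
			return errors.New("machine has no IP yet") // e.g. DHCP lease not found
		}
		return nil
	}, time.Minute)
}
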
	I0429 19:00:08.773095   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:08.773382   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855
	I0429 19:00:08.773414   26778 main.go:141] libmachine: (ha-058855-m02) DBG | unable to find defined IP address of network mk-ha-058855 interface with MAC address 52:54:00:98:81:20
	I0429 19:00:08.773586   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Using SSH client type: external
	I0429 19:00:08.773613   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa (-rw-------)
	I0429 19:00:08.773656   26778 main.go:141] libmachine: (ha-058855-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:00:08.773672   26778 main.go:141] libmachine: (ha-058855-m02) DBG | About to run SSH command:
	I0429 19:00:08.773715   26778 main.go:141] libmachine: (ha-058855-m02) DBG | exit 0
	I0429 19:00:08.777252   26778 main.go:141] libmachine: (ha-058855-m02) DBG | SSH cmd err, output: exit status 255: 
	I0429 19:00:08.777286   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0429 19:00:08.777298   26778 main.go:141] libmachine: (ha-058855-m02) DBG | command : exit 0
	I0429 19:00:08.777306   26778 main.go:141] libmachine: (ha-058855-m02) DBG | err     : exit status 255
	I0429 19:00:08.777317   26778 main.go:141] libmachine: (ha-058855-m02) DBG | output  : 
	I0429 19:00:11.779414   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Getting to WaitForSSH function...
	I0429 19:00:11.781786   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:11.782111   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:11.782155   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:11.782277   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Using SSH client type: external
	I0429 19:00:11.782302   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa (-rw-------)
	I0429 19:00:11.782340   26778 main.go:141] libmachine: (ha-058855-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:00:11.782361   26778 main.go:141] libmachine: (ha-058855-m02) DBG | About to run SSH command:
	I0429 19:00:11.782390   26778 main.go:141] libmachine: (ha-058855-m02) DBG | exit 0
	I0429 19:00:11.915120   26778 main.go:141] libmachine: (ha-058855-m02) DBG | SSH cmd err, output: <nil>: 
	I0429 19:00:11.915290   26778 main.go:141] libmachine: (ha-058855-m02) KVM machine creation complete!
	I0429 19:00:11.915625   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetConfigRaw
	I0429 19:00:11.916168   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:11.916348   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:11.916548   26778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 19:00:11.916565   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetState
	I0429 19:00:11.917861   26778 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 19:00:11.917899   26778 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 19:00:11.917909   26778 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 19:00:11.917921   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:11.919969   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:11.920293   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:11.920317   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:11.920482   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:11.920697   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:11.920833   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:11.920954   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:11.921131   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:00:11.921367   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0429 19:00:11.921385   26778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 19:00:12.037872   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:00:12.037900   26778 main.go:141] libmachine: Detecting the provisioner...
	I0429 19:00:12.037908   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:12.040538   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.040908   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.040953   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.041100   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:12.041312   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.041461   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.041633   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:12.041790   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:00:12.041952   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0429 19:00:12.041965   26778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 19:00:12.159783   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 19:00:12.159873   26778 main.go:141] libmachine: found compatible host: buildroot
	I0429 19:00:12.159890   26778 main.go:141] libmachine: Provisioning with buildroot...
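
The provisioner is chosen from the guest's /etc/os-release, which here identifies Buildroot 2023.02.9 and leads to "found compatible host: buildroot". A small sketch of that kind of detection, assuming the ID= field is what gets matched and reusing the output captured above, might be:

// Hedged sketch: extract the ID= field from /etc/os-release output to pick
// a provisioner. Illustrative only; not minikube's detection code.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func osReleaseID(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := strings.CutPrefix(line, "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return ""
}

func main() {
	// The os-release content captured in the log above.
	osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println("detected provisioner:", osReleaseID(osRelease)) // buildroot
}
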
	I0429 19:00:12.159901   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetMachineName
	I0429 19:00:12.160170   26778 buildroot.go:166] provisioning hostname "ha-058855-m02"
	I0429 19:00:12.160198   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetMachineName
	I0429 19:00:12.160380   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:12.162841   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.163184   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.163232   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.163330   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:12.163495   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.163649   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.163763   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:12.163916   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:00:12.164093   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0429 19:00:12.164106   26778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-058855-m02 && echo "ha-058855-m02" | sudo tee /etc/hostname
	I0429 19:00:12.294307   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-058855-m02
	
	I0429 19:00:12.294346   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:12.297012   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.297368   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.297402   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.297565   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:12.297754   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.297888   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.298041   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:12.298207   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:00:12.298414   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0429 19:00:12.298433   26778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-058855-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-058855-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-058855-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:00:12.427831   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:00:12.427863   26778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:00:12.427883   26778 buildroot.go:174] setting up certificates
	I0429 19:00:12.427898   26778 provision.go:84] configureAuth start
	I0429 19:00:12.427914   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetMachineName
	I0429 19:00:12.428188   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:00:12.430891   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.431294   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.431325   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.431457   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:12.433562   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.433989   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.434014   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.434171   26778 provision.go:143] copyHostCerts
	I0429 19:00:12.434198   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:00:12.434230   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:00:12.434245   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:00:12.434321   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:00:12.434406   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:00:12.434425   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:00:12.434434   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:00:12.434458   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:00:12.434545   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:00:12.434575   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:00:12.434583   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:00:12.434609   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:00:12.434666   26778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.ha-058855-m02 san=[127.0.0.1 192.168.39.27 ha-058855-m02 localhost minikube]
	I0429 19:00:12.570018   26778 provision.go:177] copyRemoteCerts
	I0429 19:00:12.570117   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:00:12.570141   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:12.572743   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.573042   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.573072   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.573219   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:12.573405   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.573576   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:12.573695   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	I0429 19:00:12.661515   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 19:00:12.661585   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:00:12.689766   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 19:00:12.689834   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 19:00:12.720381   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 19:00:12.720444   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 19:00:12.749899   26778 provision.go:87] duration metric: took 321.986297ms to configureAuth
	I0429 19:00:12.749929   26778 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:00:12.750132   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:00:12.750202   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:12.752958   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.753340   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:12.753365   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:12.753526   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:12.753732   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.753905   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:12.754047   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:12.754233   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:00:12.754391   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0429 19:00:12.754405   26778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:00:13.064704   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 19:00:13.064728   26778 main.go:141] libmachine: Checking connection to Docker...
	I0429 19:00:13.064735   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetURL
	I0429 19:00:13.066069   26778 main.go:141] libmachine: (ha-058855-m02) DBG | Using libvirt version 6000000
	I0429 19:00:13.068255   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.068591   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.068622   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.068805   26778 main.go:141] libmachine: Docker is up and running!
	I0429 19:00:13.068821   26778 main.go:141] libmachine: Reticulating splines...
	I0429 19:00:13.068827   26778 client.go:171] duration metric: took 29.612166123s to LocalClient.Create
	I0429 19:00:13.068848   26778 start.go:167] duration metric: took 29.612214179s to libmachine.API.Create "ha-058855"
	I0429 19:00:13.068862   26778 start.go:293] postStartSetup for "ha-058855-m02" (driver="kvm2")
	I0429 19:00:13.068872   26778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:00:13.068898   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:13.069242   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:00:13.069284   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:13.072032   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.072463   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.072489   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.072654   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:13.072802   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:13.072958   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:13.073162   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	I0429 19:00:13.161599   26778 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:00:13.166655   26778 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:00:13.166685   26778 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:00:13.166772   26778 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:00:13.166846   26778 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:00:13.166856   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /etc/ssl/certs/151242.pem
	I0429 19:00:13.166959   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:00:13.177727   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:00:13.206240   26778 start.go:296] duration metric: took 137.364447ms for postStartSetup
	I0429 19:00:13.206288   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetConfigRaw
	I0429 19:00:13.206821   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:00:13.209346   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.209675   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.209706   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.209938   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 19:00:13.210151   26778 start.go:128] duration metric: took 29.77159513s to createHost
	I0429 19:00:13.210175   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:13.212467   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.212802   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.212825   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.212971   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:13.213134   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:13.213283   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:13.213439   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:13.213593   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:00:13.213741   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0429 19:00:13.213751   26778 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 19:00:13.332585   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714417213.321381911
	
	I0429 19:00:13.332610   26778 fix.go:216] guest clock: 1714417213.321381911
	I0429 19:00:13.332620   26778 fix.go:229] Guest: 2024-04-29 19:00:13.321381911 +0000 UTC Remote: 2024-04-29 19:00:13.210163606 +0000 UTC m=+87.275376480 (delta=111.218305ms)
	I0429 19:00:13.332635   26778 fix.go:200] guest clock delta is within tolerance: 111.218305ms
	I0429 19:00:13.332640   26778 start.go:83] releasing machines lock for "ha-058855-m02", held for 29.89422449s
	I0429 19:00:13.332656   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:13.332892   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:00:13.335629   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.335965   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.335990   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.338552   26778 out.go:177] * Found network options:
	I0429 19:00:13.339978   26778 out.go:177]   - NO_PROXY=192.168.39.52
	W0429 19:00:13.341305   26778 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:00:13.341353   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:13.342010   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:13.342226   26778 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:00:13.342337   26778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:00:13.342375   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	W0429 19:00:13.342462   26778 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:00:13.342552   26778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:00:13.342576   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:00:13.345041   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.345255   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.345456   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.345486   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.345584   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:13.345730   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:13.345740   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:13.345751   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:13.345917   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:00:13.345928   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:13.346177   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	I0429 19:00:13.346238   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:00:13.346375   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:00:13.346536   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	I0429 19:00:13.594081   26778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:00:13.601628   26778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:00:13.601706   26778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:00:13.622685   26778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:00:13.622718   26778 start.go:494] detecting cgroup driver to use...
	I0429 19:00:13.622789   26778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:00:13.641928   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:00:13.657761   26778 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:00:13.657821   26778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:00:13.673744   26778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:00:13.689083   26778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:00:13.828789   26778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:00:13.991339   26778 docker.go:233] disabling docker service ...
	I0429 19:00:13.991432   26778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:00:14.008421   26778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:00:14.022861   26778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:00:14.166301   26778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:00:14.283457   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 19:00:14.299275   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:00:14.324665   26778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 19:00:14.324726   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.336852   26778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:00:14.336908   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.348539   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.361198   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.373518   26778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:00:14.385123   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.396271   26778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.415744   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:00:14.426968   26778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:00:14.436888   26778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 19:00:14.436949   26778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 19:00:14.453075   26778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
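[Editor's note] The lines above are the netfilter preparation for CRI-O networking: probe the net.bridge.bridge-nf-call-iptables sysctl, load br_netfilter when the key is missing, and enable IPv4 forwarding. A hedged Go sketch of the same fallback sequence using os/exec; the command strings mirror the log and error handling is simplified.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and wraps any failure with its combined output.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w (output: %s)", name, args, err, out)
	}
	return nil
}

func main() {
	// Check whether bridged traffic is already visible to iptables.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// The sysctl key does not exist yet; load the kernel module that provides it.
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("could not load br_netfilter:", err)
		}
	}
	// Kubernetes networking needs IPv4 forwarding regardless.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}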
	I0429 19:00:14.466893   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:00:14.597651   26778 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 19:00:14.757924   26778 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:00:14.757990   26778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:00:14.763349   26778 start.go:562] Will wait 60s for crictl version
	I0429 19:00:14.763396   26778 ssh_runner.go:195] Run: which crictl
	I0429 19:00:14.767450   26778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:00:14.818781   26778 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 19:00:14.818869   26778 ssh_runner.go:195] Run: crio --version
	I0429 19:00:14.850335   26778 ssh_runner.go:195] Run: crio --version
	I0429 19:00:14.886670   26778 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 19:00:14.888365   26778 out.go:177]   - env NO_PROXY=192.168.39.52
	I0429 19:00:14.889746   26778 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:00:14.892341   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:14.892741   26778 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:59 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:00:14.892771   26778 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:00:14.892958   26778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 19:00:14.897839   26778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:00:14.912933   26778 mustload.go:65] Loading cluster: ha-058855
	I0429 19:00:14.913133   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:00:14.913423   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:00:14.913460   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:00:14.928295   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39599
	I0429 19:00:14.928783   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:00:14.929310   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:00:14.929336   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:00:14.929633   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:00:14.929869   26778 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:00:14.931253   26778 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:00:14.931550   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:00:14.931582   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:00:14.945834   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45259
	I0429 19:00:14.946287   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:00:14.946699   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:00:14.946738   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:00:14.947037   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:00:14.947206   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:00:14.947377   26778 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855 for IP: 192.168.39.27
	I0429 19:00:14.947395   26778 certs.go:194] generating shared ca certs ...
	I0429 19:00:14.947411   26778 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:00:14.947572   26778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 19:00:14.947621   26778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 19:00:14.947636   26778 certs.go:256] generating profile certs ...
	I0429 19:00:14.947749   26778 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key
	I0429 19:00:14.947783   26778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.92ecc576
	I0429 19:00:14.947803   26778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.92ecc576 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.52 192.168.39.27 192.168.39.254]
	I0429 19:00:15.294884   26778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.92ecc576 ...
	I0429 19:00:15.294913   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.92ecc576: {Name:mkb034de2f41ca35c303234e6f802403c57586ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:00:15.295107   26778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.92ecc576 ...
	I0429 19:00:15.295125   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.92ecc576: {Name:mkbe37529d1b277fc4a208f5b0f89e39776fabc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:00:15.295230   26778 certs.go:381] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.92ecc576 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt
	I0429 19:00:15.295401   26778 certs.go:385] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.92ecc576 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key
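[Editor's note] The certs.go lines above mint the profile's API-server certificate with subject alternative names covering the in-cluster service IP, localhost, both node IPs, and the 192.168.39.254 HA VIP, so clients can reach the API server through any of those addresses with a valid cert. A self-contained Go sketch of issuing such a certificate with crypto/x509; the throwaway CA below is for illustration only (minikube signs with its existing minikubeCA key pair), and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch; in minikube the existing minikubeCA is reused.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API-server serving certificate with the IP SANs seen in the log.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.52"), net.ParseIP("192.168.39.27"), net.ParseIP("192.168.39.254"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}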
	I0429 19:00:15.295596   26778 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key
	I0429 19:00:15.295619   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:00:15.295639   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:00:15.295657   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:00:15.295677   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:00:15.295697   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:00:15.295710   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:00:15.295724   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:00:15.295736   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:00:15.295785   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 19:00:15.295814   26778 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 19:00:15.295824   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 19:00:15.295844   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 19:00:15.295866   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 19:00:15.295921   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 19:00:15.295967   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:00:15.296012   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /usr/share/ca-certificates/151242.pem
	I0429 19:00:15.296026   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:00:15.296039   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem -> /usr/share/ca-certificates/15124.pem
	I0429 19:00:15.296067   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:00:15.298956   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:00:15.299306   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:00:15.299341   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:00:15.299512   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:00:15.299685   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:00:15.299862   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:00:15.300028   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:00:15.378460   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0429 19:00:15.385333   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 19:00:15.400757   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0429 19:00:15.405965   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0429 19:00:15.420574   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 19:00:15.426174   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 19:00:15.439222   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0429 19:00:15.448132   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0429 19:00:15.464834   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0429 19:00:15.469819   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 19:00:15.482125   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0429 19:00:15.487531   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0429 19:00:15.499235   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:00:15.529046   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 19:00:15.555896   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:00:15.590035   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:00:15.615905   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0429 19:00:15.643601   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 19:00:15.671197   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:00:15.697565   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 19:00:15.723401   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 19:00:15.748529   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:00:15.776188   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 19:00:15.802620   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 19:00:15.820770   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0429 19:00:15.839189   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 19:00:15.858797   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0429 19:00:15.879490   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 19:00:15.901632   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0429 19:00:15.922300   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 19:00:15.942601   26778 ssh_runner.go:195] Run: openssl version
	I0429 19:00:15.949452   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 19:00:15.963818   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 19:00:15.969239   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:00:15.969303   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 19:00:15.975940   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:00:15.989823   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:00:16.003500   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:00:16.008876   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:00:16.008935   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:00:16.015404   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:00:16.028915   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 19:00:16.042500   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 19:00:16.047660   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:00:16.047719   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 19:00:16.053871   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 19:00:16.066750   26778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:00:16.071234   26778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 19:00:16.071277   26778 kubeadm.go:928] updating node {m02 192.168.39.27 8443 v1.30.0 crio true true} ...
	I0429 19:00:16.071353   26778 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-058855-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:00:16.071377   26778 kube-vip.go:115] generating kube-vip config ...
	I0429 19:00:16.071407   26778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 19:00:16.089480   26778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 19:00:16.089553   26778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
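[Editor's note] The YAML above is the kube-vip static-pod manifest that is later copied to /etc/kubernetes/manifests/kube-vip.yaml: with vip_arp, cp_enable, and lb_enable set, kube-vip leader-elects one control plane to ARP-announce the 192.168.39.254 VIP on eth0 and load-balance API traffic on port 8443. As a hedged illustration of how such a manifest can be templated, a small Go text/template sketch follows; the template and its field names are my own simplification, not minikube's kube-vip.go template.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down version of the manifest above; only the per-cluster values are templated.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	data := struct {
		Image, Interface, VIP string
		Port                  int
	}{
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
		Interface: "eth0",
		VIP:       "192.168.39.254", // the APIServerHAVIP from the log
		Port:      8443,
	}
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}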
	I0429 19:00:16.089613   26778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:00:16.100670   26778 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 19:00:16.100725   26778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 19:00:16.111891   26778 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 19:00:16.111911   26778 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0429 19:00:16.111918   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:00:16.111918   26778 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0429 19:00:16.111989   26778 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:00:16.118185   26778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 19:00:16.118221   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 19:00:50.972503   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:00:50.972586   26778 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:00:50.978615   26778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 19:00:50.978656   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 19:01:25.243523   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:01:25.262125   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:01:25.262248   26778 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:01:25.267598   26778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 19:01:25.267637   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
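[Editor's note] The download.go and scp lines above fetch the v1.30.0 kubectl, kubeadm, and kubelet binaries, verify them against the published .sha256 files (the checksum=file:... suffix is go-getter syntax for a detached checksum), and copy them into /var/lib/minikube/binaries on the node. A hedged Go sketch of the download-and-verify step only, using the same dl.k8s.io URLs; the scp transfer is omitted and error handling is kept minimal.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the hex SHA-256 of the bytes written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/"
	for _, bin := range []string{"kubectl", "kubeadm", "kubelet"} {
		sum, err := fetch(base+bin, bin)
		if err != nil {
			panic(err)
		}
		// The detached checksum file holds the hex digest (optionally followed by a file name).
		resp, err := http.Get(base + bin + ".sha256")
		if err != nil {
			panic(err)
		}
		want, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if !strings.HasPrefix(strings.TrimSpace(string(want)), sum) {
			panic(fmt.Sprintf("%s: checksum mismatch", bin))
		}
		fmt.Println(bin, "ok:", sum)
	}
}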
	I0429 19:01:25.744249   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 19:01:25.756367   26778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0429 19:01:25.776461   26778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:01:25.795913   26778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0429 19:01:25.815558   26778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 19:01:25.820116   26778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:01:25.835422   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:01:25.986295   26778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:01:26.006270   26778 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:01:26.006725   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:01:26.006777   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:01:26.021972   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I0429 19:01:26.022416   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:01:26.022919   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:01:26.022940   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:01:26.023318   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:01:26.023514   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:01:26.023650   26778 start.go:316] joinCluster: &{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:01:26.023758   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 19:01:26.023781   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:01:26.027191   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:01:26.027634   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:01:26.027664   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:01:26.027807   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:01:26.027976   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:01:26.028166   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:01:26.028361   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:01:26.227534   26778 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:01:26.227586   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token y25o3g.nddjkwofticnjyl8 --discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-058855-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443"
	I0429 19:01:50.314194   26778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token y25o3g.nddjkwofticnjyl8 --discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-058855-m02 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443": (24.086581898s)
	I0429 19:01:50.314231   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 19:01:50.976596   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-058855-m02 minikube.k8s.io/updated_at=2024_04_29T19_01_50_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-058855 minikube.k8s.io/primary=false
	I0429 19:01:51.156612   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-058855-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 19:01:51.320453   26778 start.go:318] duration metric: took 25.29679628s to joinCluster
	I0429 19:01:51.320565   26778 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:01:51.321995   26778 out.go:177] * Verifying Kubernetes components...
	I0429 19:01:51.320820   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:01:51.323237   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:01:51.613558   26778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:01:51.646971   26778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:01:51.647408   26778 kapi.go:59] client config for ha-058855: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.crt", KeyFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key", CAFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 19:01:51.647498   26778 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.52:8443
	I0429 19:01:51.647802   26778 node_ready.go:35] waiting up to 6m0s for node "ha-058855-m02" to be "Ready" ...
	I0429 19:01:51.647944   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:51.647957   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:51.647969   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:51.647980   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:51.660139   26778 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 19:01:52.148893   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:52.148914   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:52.148921   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:52.148925   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:52.152921   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:52.648346   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:52.648373   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:52.648383   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:52.648388   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:52.682535   26778 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0429 19:01:53.148627   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:53.148653   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:53.148666   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:53.148683   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:53.152363   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:53.648121   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:53.648146   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:53.648158   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:53.648166   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:53.652774   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:53.653410   26778 node_ready.go:53] node "ha-058855-m02" has status "Ready":"False"
	I0429 19:01:54.148355   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:54.148378   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:54.148390   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:54.148397   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:54.153047   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:54.648266   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:54.648287   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:54.648294   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:54.648299   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:54.652522   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:55.148546   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:55.148582   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:55.148590   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:55.148596   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:55.152445   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:55.648760   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:55.648780   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:55.648788   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:55.648792   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:55.652343   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:56.148037   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:56.148070   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:56.148093   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:56.148099   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:56.152206   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:56.152907   26778 node_ready.go:53] node "ha-058855-m02" has status "Ready":"False"
	I0429 19:01:56.648187   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:56.648210   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:56.648219   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:56.648224   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:56.652324   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:57.148479   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:57.148504   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:57.148516   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:57.148524   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:57.155633   26778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:01:57.648853   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:57.648877   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:57.648885   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:57.648890   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:57.653182   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:58.148268   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:58.148292   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:58.148320   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:58.148324   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:58.152128   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:58.648691   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:58.648713   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:58.648722   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:58.648726   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:58.652637   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:58.653456   26778 node_ready.go:53] node "ha-058855-m02" has status "Ready":"False"
	I0429 19:01:59.148550   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:59.148580   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.148592   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.148610   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.153627   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:59.155026   26778 node_ready.go:49] node "ha-058855-m02" has status "Ready":"True"
	I0429 19:01:59.155052   26778 node_ready.go:38] duration metric: took 7.507203783s for node "ha-058855-m02" to be "Ready" ...
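[Editor's note] The round_trippers block above is node_ready.go polling GET /api/v1/nodes/ha-058855-m02 roughly every 500ms until the node's Ready condition turns True, which took about 7.5s here. An equivalent hedged sketch with client-go; the kubeconfig path and node name are taken from the log, and production code would also distinguish transient from permanent API errors.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18774-7754/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll the node object until its Ready condition is True, mirroring the wait in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-058855-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient API errors
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-058855-m02" is Ready`)
}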
	I0429 19:01:59.155064   26778 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:01:59.155159   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:01:59.155173   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.155183   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.155189   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.161694   26778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:01:59.170047   26778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bbq9x" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.170148   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bbq9x
	I0429 19:01:59.170160   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.170167   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.170172   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.173879   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:59.174619   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:01:59.174637   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.174644   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.174648   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.179644   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:01:59.180979   26778 pod_ready.go:92] pod "coredns-7db6d8ff4d-bbq9x" in "kube-system" namespace has status "Ready":"True"
	I0429 19:01:59.180996   26778 pod_ready.go:81] duration metric: took 10.912717ms for pod "coredns-7db6d8ff4d-bbq9x" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.181005   26778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-njch8" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.181058   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-njch8
	I0429 19:01:59.181068   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.181080   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.181090   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.183986   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:01:59.184686   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:01:59.184701   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.184708   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.184712   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.187897   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:59.188624   26778 pod_ready.go:92] pod "coredns-7db6d8ff4d-njch8" in "kube-system" namespace has status "Ready":"True"
	I0429 19:01:59.188645   26778 pod_ready.go:81] duration metric: took 7.633481ms for pod "coredns-7db6d8ff4d-njch8" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.188658   26778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.188725   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855
	I0429 19:01:59.188737   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.188746   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.188756   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.191528   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:01:59.192263   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:01:59.192280   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.192287   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.192290   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.194727   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:01:59.195387   26778 pod_ready.go:92] pod "etcd-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:01:59.195407   26778 pod_ready.go:81] duration metric: took 6.741642ms for pod "etcd-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.195415   26778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:01:59.195460   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:01:59.195467   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.195474   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.195480   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.198388   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:01:59.199048   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:59.199063   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.199070   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.199074   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.201636   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:01:59.695652   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:01:59.695677   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.695685   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.695689   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.699159   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:01:59.699883   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:01:59.699901   26778 round_trippers.go:469] Request Headers:
	I0429 19:01:59.699912   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:01:59.699919   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:01:59.702751   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:00.195579   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:00.195604   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:00.195614   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:00.195620   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:00.201107   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:02:00.202014   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:00.202029   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:00.202036   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:00.202040   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:00.205410   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:00.696380   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:00.696405   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:00.696413   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:00.696416   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:00.700536   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:00.701517   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:00.701536   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:00.701544   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:00.701547   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:00.704698   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:01.195911   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:01.195940   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:01.195951   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:01.195956   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:01.200235   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:01.201183   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:01.201199   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:01.201204   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:01.201208   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:01.204245   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:01.205086   26778 pod_ready.go:102] pod "etcd-ha-058855-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 19:02:01.696448   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:01.696470   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:01.696477   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:01.696482   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:01.703231   26778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:02:01.704706   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:01.704723   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:01.704739   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:01.704747   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:01.707602   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:02.195798   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:02.195830   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:02.195839   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:02.195844   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:02.200740   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:02.201671   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:02.201687   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:02.201694   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:02.201699   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:02.205319   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:02.696277   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:02.696300   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:02.696308   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:02.696313   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:02.701170   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:02.702156   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:02.702171   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:02.702178   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:02.702183   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:02.705439   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:03.195582   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:03.195612   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:03.195626   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:03.195634   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:03.199601   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:03.200597   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:03.200613   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:03.200620   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:03.200626   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:03.203604   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:03.696082   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:03.696102   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:03.696111   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:03.696118   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:03.700252   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:03.701074   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:03.701089   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:03.701100   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:03.701106   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:03.703913   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:03.704576   26778 pod_ready.go:102] pod "etcd-ha-058855-m02" in "kube-system" namespace has status "Ready":"False"
	I0429 19:02:04.196005   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:04.196033   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:04.196040   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:04.196044   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:04.199615   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:04.200609   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:04.200624   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:04.200632   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:04.200636   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:04.203723   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:04.695662   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:04.695687   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:04.695697   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:04.695702   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:04.699413   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:04.700424   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:04.700439   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:04.700447   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:04.700453   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:04.703708   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.195698   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:05.195723   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.195734   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.195743   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.199593   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.200231   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:05.200250   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.200260   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.200265   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.203686   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.696469   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:02:05.696498   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.696509   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.696514   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.701346   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:05.702785   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:05.702800   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.702807   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.702810   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.706019   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.706675   26778 pod_ready.go:92] pod "etcd-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:05.706701   26778 pod_ready.go:81] duration metric: took 6.511279394s for pod "etcd-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.706713   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.706763   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-058855
	I0429 19:02:05.706770   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.706777   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.706780   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.710127   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.711122   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:02:05.711141   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.711148   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.711152   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.715021   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.715759   26778 pod_ready.go:92] pod "kube-apiserver-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:05.715781   26778 pod_ready.go:81] duration metric: took 9.06116ms for pod "kube-apiserver-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.715793   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.715851   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-058855-m02
	I0429 19:02:05.715858   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.715869   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.715875   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.718816   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:05.719499   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:05.719514   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.719519   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.719522   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.721882   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:05.722413   26778 pod_ready.go:92] pod "kube-apiserver-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:05.722429   26778 pod_ready.go:81] duration metric: took 6.62945ms for pod "kube-apiserver-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.722438   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.722480   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855
	I0429 19:02:05.722488   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.722494   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.722499   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.725036   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:05.725874   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:02:05.725889   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.725899   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.725907   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.728522   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:05.729112   26778 pod_ready.go:92] pod "kube-controller-manager-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:05.729130   26778 pod_ready.go:81] duration metric: took 6.685135ms for pod "kube-controller-manager-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.729142   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.749493   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855-m02
	I0429 19:02:05.749525   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.749535   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.749541   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.753178   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.948977   26778 request.go:629] Waited for 194.998438ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:05.949032   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:05.949037   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:05.949045   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:05.949049   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:05.952834   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:05.953786   26778 pod_ready.go:92] pod "kube-controller-manager-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:05.953810   26778 pod_ready.go:81] duration metric: took 224.658701ms for pod "kube-controller-manager-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:05.953824   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nz2rv" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:06.149131   26778 request.go:629] Waited for 195.235479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz2rv
	I0429 19:02:06.149204   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz2rv
	I0429 19:02:06.149211   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:06.149222   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:06.149230   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:06.156135   26778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:02:06.349304   26778 request.go:629] Waited for 192.378541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:06.349377   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:06.349382   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:06.349389   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:06.349394   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:06.353882   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:06.354770   26778 pod_ready.go:92] pod "kube-proxy-nz2rv" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:06.354787   26778 pod_ready.go:81] duration metric: took 400.955401ms for pod "kube-proxy-nz2rv" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:06.354796   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xldlc" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:06.548928   26778 request.go:629] Waited for 194.054332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xldlc
	I0429 19:02:06.548990   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xldlc
	I0429 19:02:06.548996   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:06.549004   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:06.549007   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:06.552981   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:06.749232   26778 request.go:629] Waited for 195.3669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:02:06.749312   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:02:06.749323   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:06.749333   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:06.749342   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:06.753364   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:06.754015   26778 pod_ready.go:92] pod "kube-proxy-xldlc" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:06.754034   26778 pod_ready.go:81] duration metric: took 399.232401ms for pod "kube-proxy-xldlc" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:06.754043   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:06.949226   26778 request.go:629] Waited for 195.086098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855
	I0429 19:02:06.949283   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855
	I0429 19:02:06.949288   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:06.949294   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:06.949297   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:06.952920   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:07.148961   26778 request.go:629] Waited for 195.205382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:02:07.149012   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:02:07.149018   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:07.149028   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:07.149035   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:07.153185   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:07.154248   26778 pod_ready.go:92] pod "kube-scheduler-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:07.154271   26778 pod_ready.go:81] duration metric: took 400.222276ms for pod "kube-scheduler-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:07.154281   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:07.349255   26778 request.go:629] Waited for 194.918313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855-m02
	I0429 19:02:07.349345   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855-m02
	I0429 19:02:07.349357   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:07.349368   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:07.349377   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:07.353193   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:07.549268   26778 request.go:629] Waited for 195.21446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:07.549331   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:02:07.549336   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:07.549343   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:07.549348   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:07.552812   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:02:07.553419   26778 pod_ready.go:92] pod "kube-scheduler-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:02:07.553439   26778 pod_ready.go:81] duration metric: took 399.150386ms for pod "kube-scheduler-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:02:07.553449   26778 pod_ready.go:38] duration metric: took 8.398363668s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
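The block above is minikube's pod_ready loop: it alternates GETs against /api/v1/namespaces/kube-system/pods/<name> and /api/v1/nodes/<node> until each system-critical pod reports the Ready condition as True. A minimal sketch of the same polling pattern using client-go follows; it is not minikube's actual code, and the kubeconfig path and pod name are illustrative, not taken from this run.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod every 500ms until its Ready condition is True,
// mirroring the pod_ready.go loop in the log above.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // equivalent of: status "Ready":"True"
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	// hypothetical kubeconfig path for illustration only
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "etcd-ha-058855-m02"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}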
	I0429 19:02:07.553468   26778 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:02:07.553523   26778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:02:07.574578   26778 api_server.go:72] duration metric: took 16.253972211s to wait for apiserver process to appear ...
	I0429 19:02:07.574610   26778 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:02:07.574634   26778 api_server.go:253] Checking apiserver healthz at https://192.168.39.52:8443/healthz ...
	I0429 19:02:07.582917   26778 api_server.go:279] https://192.168.39.52:8443/healthz returned 200:
	ok
	I0429 19:02:07.582991   26778 round_trippers.go:463] GET https://192.168.39.52:8443/version
	I0429 19:02:07.582998   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:07.583008   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:07.583013   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:07.585045   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:02:07.585480   26778 api_server.go:141] control plane version: v1.30.0
	I0429 19:02:07.585507   26778 api_server.go:131] duration metric: took 10.887919ms to wait for apiserver health ...
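Once the pods are Ready, the client probes the apiserver's /healthz endpoint until it returns "ok" and then reads /version (v1.30.0 above). A bare-bones sketch of that probe; skipping TLS verification and hard-coding the endpoint are simplifications for illustration, not what minikube actually does (it authenticates against the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// illustrative only; the real client verifies the apiserver certificate
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.39.52:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // keep probing until healthz answers "ok"
	}
}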
	I0429 19:02:07.585517   26778 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:02:07.748891   26778 request.go:629] Waited for 163.29562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:02:07.748977   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:02:07.748983   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:07.748990   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:07.748999   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:07.755331   26778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:02:07.761345   26778 system_pods.go:59] 17 kube-system pods found
	I0429 19:02:07.761393   26778 system_pods.go:61] "coredns-7db6d8ff4d-bbq9x" [a016fbf8-4a91-4f2f-97da-44b6e2195885] Running
	I0429 19:02:07.761402   26778 system_pods.go:61] "coredns-7db6d8ff4d-njch8" [823d223d-f7bd-4b9c-bdd9-8d0ae063d449] Running
	I0429 19:02:07.761412   26778 system_pods.go:61] "etcd-ha-058855" [a7e579b9-771a-4bb2-819b-a98848f52b09] Running
	I0429 19:02:07.761418   26778 system_pods.go:61] "etcd-ha-058855-m02" [08e98635-58d8-460b-9432-4bb03c74099c] Running
	I0429 19:02:07.761426   26778 system_pods.go:61] "kindnet-j42cd" [13d10343-b59f-490f-ac7c-973271cc27d2] Running
	I0429 19:02:07.761431   26778 system_pods.go:61] "kindnet-xdtp4" [510a69a6-5bd3-44ba-a81f-6d35a38b6ad2] Running
	I0429 19:02:07.761437   26778 system_pods.go:61] "kube-apiserver-ha-058855" [d2eb7bde-88b9-4366-be20-593097820579] Running
	I0429 19:02:07.761440   26778 system_pods.go:61] "kube-apiserver-ha-058855-m02" [94599f7a-b9de-4db3-b858-a380793bbd34] Running
	I0429 19:02:07.761444   26778 system_pods.go:61] "kube-controller-manager-ha-058855" [56527f4a-57d1-4a44-be01-7747abcbfce0] Running
	I0429 19:02:07.761448   26778 system_pods.go:61] "kube-controller-manager-ha-058855-m02" [201796e2-157c-40ce-bf68-c2472bab9e3a] Running
	I0429 19:02:07.761451   26778 system_pods.go:61] "kube-proxy-nz2rv" [32002a66-d55f-4011-bb78-c4c6e35238b3] Running
	I0429 19:02:07.761455   26778 system_pods.go:61] "kube-proxy-xldlc" [a01564cb-ea76-4cc5-abad-d2d70b79bf6d] Running
	I0429 19:02:07.761458   26778 system_pods.go:61] "kube-scheduler-ha-058855" [d71e876d-d5be-4671-924b-3fd828de92a1] Running
	I0429 19:02:07.761461   26778 system_pods.go:61] "kube-scheduler-ha-058855-m02" [69bbddf9-e5f6-4ede-abd0-762b0642fda4] Running
	I0429 19:02:07.761465   26778 system_pods.go:61] "kube-vip-ha-058855" [76e512c7-e0ea-417e-8239-63bb073dc04d] Running
	I0429 19:02:07.761468   26778 system_pods.go:61] "kube-vip-ha-058855-m02" [1569a60d-d6a1-4685-8405-689270322b97] Running
	I0429 19:02:07.761470   26778 system_pods.go:61] "storage-provisioner" [1572f7da-1bda-4b9e-a5fc-315aae3ba592] Running
	I0429 19:02:07.761476   26778 system_pods.go:74] duration metric: took 175.953408ms to wait for pod list to return data ...
	I0429 19:02:07.761487   26778 default_sa.go:34] waiting for default service account to be created ...
	I0429 19:02:07.948926   26778 request.go:629] Waited for 187.333923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:02:07.948993   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:02:07.948998   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:07.949005   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:07.949011   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:07.953595   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:02:07.953854   26778 default_sa.go:45] found service account: "default"
	I0429 19:02:07.953875   26778 default_sa.go:55] duration metric: took 192.380789ms for default service account to be created ...
	I0429 19:02:07.953892   26778 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 19:02:08.149354   26778 request.go:629] Waited for 195.395764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:02:08.149418   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:02:08.149425   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:08.149435   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:08.149443   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:08.157416   26778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:02:08.163248   26778 system_pods.go:86] 17 kube-system pods found
	I0429 19:02:08.163275   26778 system_pods.go:89] "coredns-7db6d8ff4d-bbq9x" [a016fbf8-4a91-4f2f-97da-44b6e2195885] Running
	I0429 19:02:08.163280   26778 system_pods.go:89] "coredns-7db6d8ff4d-njch8" [823d223d-f7bd-4b9c-bdd9-8d0ae063d449] Running
	I0429 19:02:08.163285   26778 system_pods.go:89] "etcd-ha-058855" [a7e579b9-771a-4bb2-819b-a98848f52b09] Running
	I0429 19:02:08.163289   26778 system_pods.go:89] "etcd-ha-058855-m02" [08e98635-58d8-460b-9432-4bb03c74099c] Running
	I0429 19:02:08.163293   26778 system_pods.go:89] "kindnet-j42cd" [13d10343-b59f-490f-ac7c-973271cc27d2] Running
	I0429 19:02:08.163297   26778 system_pods.go:89] "kindnet-xdtp4" [510a69a6-5bd3-44ba-a81f-6d35a38b6ad2] Running
	I0429 19:02:08.163301   26778 system_pods.go:89] "kube-apiserver-ha-058855" [d2eb7bde-88b9-4366-be20-593097820579] Running
	I0429 19:02:08.163305   26778 system_pods.go:89] "kube-apiserver-ha-058855-m02" [94599f7a-b9de-4db3-b858-a380793bbd34] Running
	I0429 19:02:08.163309   26778 system_pods.go:89] "kube-controller-manager-ha-058855" [56527f4a-57d1-4a44-be01-7747abcbfce0] Running
	I0429 19:02:08.163313   26778 system_pods.go:89] "kube-controller-manager-ha-058855-m02" [201796e2-157c-40ce-bf68-c2472bab9e3a] Running
	I0429 19:02:08.163319   26778 system_pods.go:89] "kube-proxy-nz2rv" [32002a66-d55f-4011-bb78-c4c6e35238b3] Running
	I0429 19:02:08.163323   26778 system_pods.go:89] "kube-proxy-xldlc" [a01564cb-ea76-4cc5-abad-d2d70b79bf6d] Running
	I0429 19:02:08.163328   26778 system_pods.go:89] "kube-scheduler-ha-058855" [d71e876d-d5be-4671-924b-3fd828de92a1] Running
	I0429 19:02:08.163333   26778 system_pods.go:89] "kube-scheduler-ha-058855-m02" [69bbddf9-e5f6-4ede-abd0-762b0642fda4] Running
	I0429 19:02:08.163338   26778 system_pods.go:89] "kube-vip-ha-058855" [76e512c7-e0ea-417e-8239-63bb073dc04d] Running
	I0429 19:02:08.163342   26778 system_pods.go:89] "kube-vip-ha-058855-m02" [1569a60d-d6a1-4685-8405-689270322b97] Running
	I0429 19:02:08.163348   26778 system_pods.go:89] "storage-provisioner" [1572f7da-1bda-4b9e-a5fc-315aae3ba592] Running
	I0429 19:02:08.163355   26778 system_pods.go:126] duration metric: took 209.454349ms to wait for k8s-apps to be running ...
	I0429 19:02:08.163369   26778 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 19:02:08.163413   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:02:08.179889   26778 system_svc.go:56] duration metric: took 16.512589ms WaitForService to wait for kubelet
	I0429 19:02:08.179921   26778 kubeadm.go:576] duration metric: took 16.859320064s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:02:08.179940   26778 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:02:08.349388   26778 request.go:629] Waited for 169.36317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes
	I0429 19:02:08.349475   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes
	I0429 19:02:08.349482   26778 round_trippers.go:469] Request Headers:
	I0429 19:02:08.349493   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:02:08.349511   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:02:08.354796   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:02:08.355527   26778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:02:08.355551   26778 node_conditions.go:123] node cpu capacity is 2
	I0429 19:02:08.355568   26778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:02:08.355573   26778 node_conditions.go:123] node cpu capacity is 2
	I0429 19:02:08.355589   26778 node_conditions.go:105] duration metric: took 175.640559ms to run NodePressure ...
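The repeated "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's client-side rate limiter, which by default allows roughly 5 requests per second with a burst of 10; once the burst is spent, each request blocks until a token is available. A small stand-alone illustration of that token-bucket behaviour, assuming the default QPS/burst values rather than whatever this particular client was configured with:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// ~5 requests/s once the burst of 10 is exhausted (client-go defaults)
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks here, producing the "Waited for ..." delays
		if waited := time.Since(start); waited > time.Millisecond {
			fmt.Printf("request %d waited %v for a token\n", i, waited)
		}
	}
}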
	I0429 19:02:08.355604   26778 start.go:240] waiting for startup goroutines ...
	I0429 19:02:08.355639   26778 start.go:254] writing updated cluster config ...
	I0429 19:02:08.357710   26778 out.go:177] 
	I0429 19:02:08.359265   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:02:08.359376   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 19:02:08.361100   26778 out.go:177] * Starting "ha-058855-m03" control-plane node in "ha-058855" cluster
	I0429 19:02:08.362355   26778 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:02:08.362385   26778 cache.go:56] Caching tarball of preloaded images
	I0429 19:02:08.362500   26778 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 19:02:08.362513   26778 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 19:02:08.362613   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 19:02:08.362808   26778 start.go:360] acquireMachinesLock for ha-058855-m03: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:02:08.362872   26778 start.go:364] duration metric: took 41.606µs to acquireMachinesLock for "ha-058855-m03"
	I0429 19:02:08.362897   26778 start.go:93] Provisioning new machine with config: &{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:02:08.363007   26778 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0429 19:02:08.364585   26778 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 19:02:08.364702   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:02:08.364749   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:02:08.379686   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45739
	I0429 19:02:08.380148   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:02:08.380572   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:02:08.380594   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:02:08.380985   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:02:08.381208   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetMachineName
	I0429 19:02:08.381371   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:08.381582   26778 start.go:159] libmachine.API.Create for "ha-058855" (driver="kvm2")
	I0429 19:02:08.381617   26778 client.go:168] LocalClient.Create starting
	I0429 19:02:08.381660   26778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 19:02:08.381702   26778 main.go:141] libmachine: Decoding PEM data...
	I0429 19:02:08.381724   26778 main.go:141] libmachine: Parsing certificate...
	I0429 19:02:08.381788   26778 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 19:02:08.381816   26778 main.go:141] libmachine: Decoding PEM data...
	I0429 19:02:08.381829   26778 main.go:141] libmachine: Parsing certificate...
	I0429 19:02:08.381855   26778 main.go:141] libmachine: Running pre-create checks...
	I0429 19:02:08.381866   26778 main.go:141] libmachine: (ha-058855-m03) Calling .PreCreateCheck
	I0429 19:02:08.382040   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetConfigRaw
	I0429 19:02:08.382510   26778 main.go:141] libmachine: Creating machine...
	I0429 19:02:08.382529   26778 main.go:141] libmachine: (ha-058855-m03) Calling .Create
	I0429 19:02:08.382664   26778 main.go:141] libmachine: (ha-058855-m03) Creating KVM machine...
	I0429 19:02:08.384200   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found existing default KVM network
	I0429 19:02:08.384300   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found existing private KVM network mk-ha-058855
	I0429 19:02:08.384458   26778 main.go:141] libmachine: (ha-058855-m03) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03 ...
	I0429 19:02:08.384489   26778 main.go:141] libmachine: (ha-058855-m03) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 19:02:08.384545   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:08.384452   28843 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:02:08.384682   26778 main.go:141] libmachine: (ha-058855-m03) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 19:02:08.613282   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:08.613105   28843 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa...
	I0429 19:02:08.790681   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:08.790569   28843 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/ha-058855-m03.rawdisk...
	I0429 19:02:08.790712   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Writing magic tar header
	I0429 19:02:08.790728   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Writing SSH key tar header
	I0429 19:02:08.790829   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:08.790771   28843 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03 ...
	I0429 19:02:08.790928   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03
	I0429 19:02:08.790947   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 19:02:08.790956   26778 main.go:141] libmachine: (ha-058855-m03) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03 (perms=drwx------)
	I0429 19:02:08.790963   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:02:08.790973   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 19:02:08.790982   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 19:02:08.790990   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home/jenkins
	I0429 19:02:08.790997   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Checking permissions on dir: /home
	I0429 19:02:08.791004   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Skipping /home - not owner
	I0429 19:02:08.791015   26778 main.go:141] libmachine: (ha-058855-m03) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 19:02:08.791024   26778 main.go:141] libmachine: (ha-058855-m03) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 19:02:08.791033   26778 main.go:141] libmachine: (ha-058855-m03) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 19:02:08.791042   26778 main.go:141] libmachine: (ha-058855-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 19:02:08.791049   26778 main.go:141] libmachine: (ha-058855-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 19:02:08.791056   26778 main.go:141] libmachine: (ha-058855-m03) Creating domain...
	I0429 19:02:08.792048   26778 main.go:141] libmachine: (ha-058855-m03) define libvirt domain using xml: 
	I0429 19:02:08.792077   26778 main.go:141] libmachine: (ha-058855-m03) <domain type='kvm'>
	I0429 19:02:08.792099   26778 main.go:141] libmachine: (ha-058855-m03)   <name>ha-058855-m03</name>
	I0429 19:02:08.792117   26778 main.go:141] libmachine: (ha-058855-m03)   <memory unit='MiB'>2200</memory>
	I0429 19:02:08.792140   26778 main.go:141] libmachine: (ha-058855-m03)   <vcpu>2</vcpu>
	I0429 19:02:08.792157   26778 main.go:141] libmachine: (ha-058855-m03)   <features>
	I0429 19:02:08.792162   26778 main.go:141] libmachine: (ha-058855-m03)     <acpi/>
	I0429 19:02:08.792167   26778 main.go:141] libmachine: (ha-058855-m03)     <apic/>
	I0429 19:02:08.792172   26778 main.go:141] libmachine: (ha-058855-m03)     <pae/>
	I0429 19:02:08.792177   26778 main.go:141] libmachine: (ha-058855-m03)     
	I0429 19:02:08.792186   26778 main.go:141] libmachine: (ha-058855-m03)   </features>
	I0429 19:02:08.792198   26778 main.go:141] libmachine: (ha-058855-m03)   <cpu mode='host-passthrough'>
	I0429 19:02:08.792213   26778 main.go:141] libmachine: (ha-058855-m03)   
	I0429 19:02:08.792223   26778 main.go:141] libmachine: (ha-058855-m03)   </cpu>
	I0429 19:02:08.792229   26778 main.go:141] libmachine: (ha-058855-m03)   <os>
	I0429 19:02:08.792238   26778 main.go:141] libmachine: (ha-058855-m03)     <type>hvm</type>
	I0429 19:02:08.792256   26778 main.go:141] libmachine: (ha-058855-m03)     <boot dev='cdrom'/>
	I0429 19:02:08.792273   26778 main.go:141] libmachine: (ha-058855-m03)     <boot dev='hd'/>
	I0429 19:02:08.792288   26778 main.go:141] libmachine: (ha-058855-m03)     <bootmenu enable='no'/>
	I0429 19:02:08.792297   26778 main.go:141] libmachine: (ha-058855-m03)   </os>
	I0429 19:02:08.792308   26778 main.go:141] libmachine: (ha-058855-m03)   <devices>
	I0429 19:02:08.792327   26778 main.go:141] libmachine: (ha-058855-m03)     <disk type='file' device='cdrom'>
	I0429 19:02:08.792348   26778 main.go:141] libmachine: (ha-058855-m03)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/boot2docker.iso'/>
	I0429 19:02:08.792365   26778 main.go:141] libmachine: (ha-058855-m03)       <target dev='hdc' bus='scsi'/>
	I0429 19:02:08.792378   26778 main.go:141] libmachine: (ha-058855-m03)       <readonly/>
	I0429 19:02:08.792389   26778 main.go:141] libmachine: (ha-058855-m03)     </disk>
	I0429 19:02:08.792412   26778 main.go:141] libmachine: (ha-058855-m03)     <disk type='file' device='disk'>
	I0429 19:02:08.792427   26778 main.go:141] libmachine: (ha-058855-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 19:02:08.792443   26778 main.go:141] libmachine: (ha-058855-m03)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/ha-058855-m03.rawdisk'/>
	I0429 19:02:08.792457   26778 main.go:141] libmachine: (ha-058855-m03)       <target dev='hda' bus='virtio'/>
	I0429 19:02:08.792468   26778 main.go:141] libmachine: (ha-058855-m03)     </disk>
	I0429 19:02:08.792481   26778 main.go:141] libmachine: (ha-058855-m03)     <interface type='network'>
	I0429 19:02:08.792492   26778 main.go:141] libmachine: (ha-058855-m03)       <source network='mk-ha-058855'/>
	I0429 19:02:08.792502   26778 main.go:141] libmachine: (ha-058855-m03)       <model type='virtio'/>
	I0429 19:02:08.792513   26778 main.go:141] libmachine: (ha-058855-m03)     </interface>
	I0429 19:02:08.792533   26778 main.go:141] libmachine: (ha-058855-m03)     <interface type='network'>
	I0429 19:02:08.792555   26778 main.go:141] libmachine: (ha-058855-m03)       <source network='default'/>
	I0429 19:02:08.792567   26778 main.go:141] libmachine: (ha-058855-m03)       <model type='virtio'/>
	I0429 19:02:08.792592   26778 main.go:141] libmachine: (ha-058855-m03)     </interface>
	I0429 19:02:08.792613   26778 main.go:141] libmachine: (ha-058855-m03)     <serial type='pty'>
	I0429 19:02:08.792624   26778 main.go:141] libmachine: (ha-058855-m03)       <target port='0'/>
	I0429 19:02:08.792640   26778 main.go:141] libmachine: (ha-058855-m03)     </serial>
	I0429 19:02:08.792657   26778 main.go:141] libmachine: (ha-058855-m03)     <console type='pty'>
	I0429 19:02:08.792679   26778 main.go:141] libmachine: (ha-058855-m03)       <target type='serial' port='0'/>
	I0429 19:02:08.792694   26778 main.go:141] libmachine: (ha-058855-m03)     </console>
	I0429 19:02:08.792703   26778 main.go:141] libmachine: (ha-058855-m03)     <rng model='virtio'>
	I0429 19:02:08.792715   26778 main.go:141] libmachine: (ha-058855-m03)       <backend model='random'>/dev/random</backend>
	I0429 19:02:08.792727   26778 main.go:141] libmachine: (ha-058855-m03)     </rng>
	I0429 19:02:08.792739   26778 main.go:141] libmachine: (ha-058855-m03)     
	I0429 19:02:08.792751   26778 main.go:141] libmachine: (ha-058855-m03)     
	I0429 19:02:08.792762   26778 main.go:141] libmachine: (ha-058855-m03)   </devices>
	I0429 19:02:08.792774   26778 main.go:141] libmachine: (ha-058855-m03) </domain>
	I0429 19:02:08.792783   26778 main.go:141] libmachine: (ha-058855-m03) 
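The XML printed above is the libvirt guest definition for the new ha-058855-m03 VM: a SCSI cdrom holding the boot2docker ISO, a raw virtio disk, two virtio network interfaces (the private mk-ha-058855 network and the default network), a serial console, and a virtio RNG. A minimal sketch of how such a definition can be registered and started through the Go libvirt bindings (libvirt.org/go/libvirt); it is not the kvm2 driver's actual code, and the XML file name is a hypothetical stand-in for the document shown in the log.

package main

import (
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// hypothetical file holding the <domain> XML printed in the log above
	xml, err := os.ReadFile("ha-058855-m03.xml")
	if err != nil {
		panic(err)
	}
	// same URI as KVMQemuURI in the machine config
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// register the persistent domain definition ("define libvirt domain using xml")
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	// boot the guest ("Creating domain...")
	if err := dom.Create(); err != nil {
		panic(err)
	}
}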
	I0429 19:02:08.799685   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:e5:cf:5c in network default
	I0429 19:02:08.800324   26778 main.go:141] libmachine: (ha-058855-m03) Ensuring networks are active...
	I0429 19:02:08.800341   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:08.801029   26778 main.go:141] libmachine: (ha-058855-m03) Ensuring network default is active
	I0429 19:02:08.801344   26778 main.go:141] libmachine: (ha-058855-m03) Ensuring network mk-ha-058855 is active
	I0429 19:02:08.801736   26778 main.go:141] libmachine: (ha-058855-m03) Getting domain xml...
	I0429 19:02:08.802442   26778 main.go:141] libmachine: (ha-058855-m03) Creating domain...
	I0429 19:02:10.035797   26778 main.go:141] libmachine: (ha-058855-m03) Waiting to get IP...
	I0429 19:02:10.036693   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:10.037215   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:10.037275   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:10.037211   28843 retry.go:31] will retry after 205.30777ms: waiting for machine to come up
	I0429 19:02:10.244541   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:10.245019   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:10.245048   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:10.244956   28843 retry.go:31] will retry after 360.234026ms: waiting for machine to come up
	I0429 19:02:10.606436   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:10.606889   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:10.606922   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:10.606815   28843 retry.go:31] will retry after 331.023484ms: waiting for machine to come up
	I0429 19:02:10.939402   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:10.939850   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:10.939872   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:10.939820   28843 retry.go:31] will retry after 374.808223ms: waiting for machine to come up
	I0429 19:02:11.316070   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:11.316490   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:11.316522   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:11.316429   28843 retry.go:31] will retry after 738.608974ms: waiting for machine to come up
	I0429 19:02:12.056259   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:12.056713   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:12.056753   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:12.056663   28843 retry.go:31] will retry after 651.218996ms: waiting for machine to come up
	I0429 19:02:12.708916   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:12.709538   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:12.709595   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:12.709483   28843 retry.go:31] will retry after 1.03070831s: waiting for machine to come up
	I0429 19:02:13.742455   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:13.742918   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:13.742947   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:13.742885   28843 retry.go:31] will retry after 1.458077686s: waiting for machine to come up
	I0429 19:02:15.203432   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:15.203828   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:15.203874   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:15.203783   28843 retry.go:31] will retry after 1.838914254s: waiting for machine to come up
	I0429 19:02:17.044416   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:17.044802   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:17.044826   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:17.044759   28843 retry.go:31] will retry after 1.717712909s: waiting for machine to come up
	I0429 19:02:18.764219   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:18.764743   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:18.764820   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:18.764760   28843 retry.go:31] will retry after 2.395935751s: waiting for machine to come up
	I0429 19:02:21.163089   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:21.163488   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:21.163520   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:21.163440   28843 retry.go:31] will retry after 3.531379998s: waiting for machine to come up
	I0429 19:02:24.696789   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:24.697155   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:24.697182   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:24.697111   28843 retry.go:31] will retry after 3.999554375s: waiting for machine to come up
	I0429 19:02:28.698037   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:28.698491   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find current IP address of domain ha-058855-m03 in network mk-ha-058855
	I0429 19:02:28.698521   26778 main.go:141] libmachine: (ha-058855-m03) DBG | I0429 19:02:28.698441   28843 retry.go:31] will retry after 4.45435299s: waiting for machine to come up
	I0429 19:02:33.155149   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:33.155672   26778 main.go:141] libmachine: (ha-058855-m03) Found IP for machine: 192.168.39.215
	I0429 19:02:33.155695   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has current primary IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
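The retry sequence above is the kvm2 driver polling libvirt for a DHCP lease on the node's MAC address, sleeping a growing interval between attempts (from roughly 205ms up to about 4.5s) until the lease appears. A minimal Go sketch of that wait loop follows; lookupLeaseIP is a hypothetical placeholder for the libvirt lease query, and the jitter and 5s cap are illustrative choices, not minikube's actual parameters.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupLeaseIP stands in for asking libvirt which IP the domain's MAC
// (e.g. 52:54:00:78:23:56 in network mk-ha-058855) has leased.
func lookupLeaseIP(mac string) (string, error) {
	return "", errNoLease
}

// waitForIP polls until an IP shows up or the deadline passes, backing off
// between attempts much like the "will retry after ..." lines in the log.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// Jittered, growing delay between polls.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
		if backoff < 5*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for lease on %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:78:23:56", 3*time.Second)
	fmt.Println(ip, err)
}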
	I0429 19:02:33.155704   26778 main.go:141] libmachine: (ha-058855-m03) Reserving static IP address...
	I0429 19:02:33.156035   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find host DHCP lease matching {name: "ha-058855-m03", mac: "52:54:00:78:23:56", ip: "192.168.39.215"} in network mk-ha-058855
	I0429 19:02:33.230932   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Getting to WaitForSSH function...
	I0429 19:02:33.230964   26778 main.go:141] libmachine: (ha-058855-m03) Reserved static IP address: 192.168.39.215
	I0429 19:02:33.230979   26778 main.go:141] libmachine: (ha-058855-m03) Waiting for SSH to be available...
	I0429 19:02:33.233825   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:33.234284   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855
	I0429 19:02:33.234315   26778 main.go:141] libmachine: (ha-058855-m03) DBG | unable to find defined IP address of network mk-ha-058855 interface with MAC address 52:54:00:78:23:56
	I0429 19:02:33.234471   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Using SSH client type: external
	I0429 19:02:33.234492   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa (-rw-------)
	I0429 19:02:33.234521   26778 main.go:141] libmachine: (ha-058855-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:02:33.234534   26778 main.go:141] libmachine: (ha-058855-m03) DBG | About to run SSH command:
	I0429 19:02:33.234546   26778 main.go:141] libmachine: (ha-058855-m03) DBG | exit 0
	I0429 19:02:33.238208   26778 main.go:141] libmachine: (ha-058855-m03) DBG | SSH cmd err, output: exit status 255: 
	I0429 19:02:33.238229   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0429 19:02:33.238236   26778 main.go:141] libmachine: (ha-058855-m03) DBG | command : exit 0
	I0429 19:02:33.238241   26778 main.go:141] libmachine: (ha-058855-m03) DBG | err     : exit status 255
	I0429 19:02:33.238282   26778 main.go:141] libmachine: (ha-058855-m03) DBG | output  : 
	I0429 19:02:36.238448   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Getting to WaitForSSH function...
	I0429 19:02:36.240759   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.241126   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.241159   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.241288   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Using SSH client type: external
	I0429 19:02:36.241305   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa (-rw-------)
	I0429 19:02:36.241332   26778 main.go:141] libmachine: (ha-058855-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:02:36.241347   26778 main.go:141] libmachine: (ha-058855-m03) DBG | About to run SSH command:
	I0429 19:02:36.241357   26778 main.go:141] libmachine: (ha-058855-m03) DBG | exit 0
	I0429 19:02:36.370936   26778 main.go:141] libmachine: (ha-058855-m03) DBG | SSH cmd err, output: <nil>: 
	I0429 19:02:36.371201   26778 main.go:141] libmachine: (ha-058855-m03) KVM machine creation complete!
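WaitForSSH above simply runs "exit 0" through an external ssh client with the options shown in the log and keeps retrying until the command exits cleanly; the first attempt fails with exit status 255 because the guest's sshd is not yet reachable. A rough sketch of that readiness probe, using the host and key path from the log purely as illustrative values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once "exit 0" succeeds over ssh, mirroring the
// external-client probe in the log (options abbreviated for brevity).
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 20; i++ {
		if sshReady("192.168.39.215", "/path/to/id_rsa") {
			fmt.Println("ssh is available")
			return
		}
		time.Sleep(3 * time.Second) // the log waits ~3s between attempts
	}
	fmt.Println("gave up waiting for ssh")
}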
	I0429 19:02:36.371505   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetConfigRaw
	I0429 19:02:36.372035   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:36.372218   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:36.372422   26778 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 19:02:36.372444   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetState
	I0429 19:02:36.373794   26778 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 19:02:36.373815   26778 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 19:02:36.373823   26778 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 19:02:36.373833   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:36.376171   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.376554   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.376577   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.376800   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:36.377013   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.377179   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.377334   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:36.377526   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:02:36.377774   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0429 19:02:36.377788   26778 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 19:02:36.493886   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:02:36.493908   26778 main.go:141] libmachine: Detecting the provisioner...
	I0429 19:02:36.493916   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:36.496489   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.496864   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.496897   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.497041   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:36.497239   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.497395   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.497551   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:36.497737   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:02:36.497944   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0429 19:02:36.497960   26778 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 19:02:36.611677   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 19:02:36.611756   26778 main.go:141] libmachine: found compatible host: buildroot
	I0429 19:02:36.611771   26778 main.go:141] libmachine: Provisioning with buildroot...
	I0429 19:02:36.611783   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetMachineName
	I0429 19:02:36.612077   26778 buildroot.go:166] provisioning hostname "ha-058855-m03"
	I0429 19:02:36.612107   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetMachineName
	I0429 19:02:36.612296   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:36.615206   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.615663   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.615699   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.615838   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:36.616000   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.616186   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.616340   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:36.616522   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:02:36.616700   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0429 19:02:36.616713   26778 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-058855-m03 && echo "ha-058855-m03" | sudo tee /etc/hostname
	I0429 19:02:36.755088   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-058855-m03
	
	I0429 19:02:36.755134   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:36.757679   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.757979   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.758014   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.758219   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:36.758409   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.758550   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:36.758696   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:36.758844   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:02:36.759005   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0429 19:02:36.759022   26778 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-058855-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-058855-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-058855-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:02:36.887249   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:02:36.887285   26778 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:02:36.887302   26778 buildroot.go:174] setting up certificates
	I0429 19:02:36.887313   26778 provision.go:84] configureAuth start
	I0429 19:02:36.887321   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetMachineName
	I0429 19:02:36.887665   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:02:36.890544   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.891010   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.891052   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.891197   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:36.893127   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.893425   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:36.893457   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:36.893582   26778 provision.go:143] copyHostCerts
	I0429 19:02:36.893622   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:02:36.893669   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:02:36.893681   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:02:36.893768   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:02:36.893861   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:02:36.893890   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:02:36.893913   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:02:36.893966   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:02:36.894030   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:02:36.894055   26778 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:02:36.894080   26778 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:02:36.894116   26778 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:02:36.894185   26778 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.ha-058855-m03 san=[127.0.0.1 192.168.39.215 ha-058855-m03 localhost minikube]
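The server certificate for the new node is issued by the existing minikube CA with the SANs listed above (loopback, the node IP, and the hostname aliases). A minimal sketch of that step with Go's crypto/x509 follows; it assumes PEM-encoded PKCS#1 CA files and uses placeholder paths, and it is an illustration rather than minikube's provisioning code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Load the signing CA; "ca.pem"/"ca-key.pem" are placeholder paths.
	caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
	keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
	caCert := must(x509.ParseCertificate(caBlock.Bytes))
	caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes)) // assumes a PKCS#1 RSA key

	serverKey := must(rsa.GenerateKey(rand.Reader, 2048))

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-058855-m03"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as reported in the log line above.
		DNSNames: []string{"ha-058855-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.215"),
		},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey))
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}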
	I0429 19:02:37.309547   26778 provision.go:177] copyRemoteCerts
	I0429 19:02:37.309631   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:02:37.309662   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:37.312216   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.312602   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:37.312637   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.312788   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:37.312983   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:37.313179   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:37.313324   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:02:37.402259   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 19:02:37.402353   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:02:37.433368   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 19:02:37.433440   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 19:02:37.462744   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 19:02:37.462823   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 19:02:37.492777   26778 provision.go:87] duration metric: took 605.454335ms to configureAuth
	I0429 19:02:37.492803   26778 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:02:37.493003   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:02:37.493074   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:37.495751   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.496046   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:37.496079   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.496233   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:37.496448   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:37.496618   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:37.496815   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:37.496993   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:02:37.497190   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0429 19:02:37.497207   26778 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:02:37.809266   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 19:02:37.809294   26778 main.go:141] libmachine: Checking connection to Docker...
	I0429 19:02:37.809305   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetURL
	I0429 19:02:37.810692   26778 main.go:141] libmachine: (ha-058855-m03) DBG | Using libvirt version 6000000
	I0429 19:02:37.813109   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.813502   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:37.813536   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.813692   26778 main.go:141] libmachine: Docker is up and running!
	I0429 19:02:37.813763   26778 main.go:141] libmachine: Reticulating splines...
	I0429 19:02:37.813779   26778 client.go:171] duration metric: took 29.432154059s to LocalClient.Create
	I0429 19:02:37.813814   26778 start.go:167] duration metric: took 29.432234477s to libmachine.API.Create "ha-058855"
	I0429 19:02:37.813828   26778 start.go:293] postStartSetup for "ha-058855-m03" (driver="kvm2")
	I0429 19:02:37.813841   26778 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:02:37.813864   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:37.814271   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:02:37.814300   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:37.817054   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.817370   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:37.817402   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.817550   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:37.817734   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:37.817880   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:37.818033   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:02:37.910621   26778 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:02:37.915741   26778 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:02:37.915770   26778 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:02:37.915856   26778 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:02:37.915950   26778 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:02:37.915961   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /etc/ssl/certs/151242.pem
	I0429 19:02:37.916067   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:02:37.928269   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:02:37.956261   26778 start.go:296] duration metric: took 142.421236ms for postStartSetup
	I0429 19:02:37.956321   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetConfigRaw
	I0429 19:02:37.957015   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:02:37.959500   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.959944   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:37.959976   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.960290   26778 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 19:02:37.960552   26778 start.go:128] duration metric: took 29.597532358s to createHost
	I0429 19:02:37.960586   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:37.962770   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.963234   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:37.963271   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:37.963433   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:37.963613   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:37.963801   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:37.963972   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:37.964170   26778 main.go:141] libmachine: Using SSH client type: native
	I0429 19:02:37.964399   26778 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0429 19:02:37.964417   26778 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 19:02:38.083548   26778 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714417358.071138040
	
	I0429 19:02:38.083570   26778 fix.go:216] guest clock: 1714417358.071138040
	I0429 19:02:38.083578   26778 fix.go:229] Guest: 2024-04-29 19:02:38.07113804 +0000 UTC Remote: 2024-04-29 19:02:37.96056996 +0000 UTC m=+232.025782840 (delta=110.56808ms)
	I0429 19:02:38.083592   26778 fix.go:200] guest clock delta is within tolerance: 110.56808ms
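The guest-clock check compares the VM's "date +%s.%N" output against the host clock and only resynchronizes when the drift exceeds a tolerance; here the 110ms delta is accepted. The comparison amounts to the small sketch below, where the one-second tolerance is an assumed value for illustration.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute guest/host clock delta is
// small enough to skip resyncing the guest clock.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(110 * time.Millisecond) // delta reported in the log
	fmt.Println(withinTolerance(guest, host, time.Second))
}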
	I0429 19:02:38.083596   26778 start.go:83] releasing machines lock for "ha-058855-m03", held for 29.720713421s
	I0429 19:02:38.083611   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:38.083908   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:02:38.086506   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:38.086932   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:38.086962   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:38.089023   26778 out.go:177] * Found network options:
	I0429 19:02:38.090341   26778 out.go:177]   - NO_PROXY=192.168.39.52,192.168.39.27
	W0429 19:02:38.091645   26778 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:02:38.091670   26778 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:02:38.091683   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:38.092207   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:38.092425   26778 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:02:38.092509   26778 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:02:38.092551   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	W0429 19:02:38.092647   26778 proxy.go:119] fail to check proxy env: Error ip not in block
	W0429 19:02:38.092671   26778 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 19:02:38.092768   26778 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:02:38.092790   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:02:38.095236   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:38.095576   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:38.095622   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:38.095649   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:38.095757   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:38.095947   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:38.096130   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:38.096155   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:38.096150   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:38.096329   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:02:38.096343   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:02:38.096504   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:02:38.096680   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:02:38.096859   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:02:38.343466   26778 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:02:38.351513   26778 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:02:38.351589   26778 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:02:38.375353   26778 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:02:38.375379   26778 start.go:494] detecting cgroup driver to use...
	I0429 19:02:38.375452   26778 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:02:38.397208   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:02:38.418298   26778 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:02:38.418422   26778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:02:38.436518   26778 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:02:38.453908   26778 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:02:38.588024   26778 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:02:38.745271   26778 docker.go:233] disabling docker service ...
	I0429 19:02:38.745365   26778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:02:38.762514   26778 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:02:38.779768   26778 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:02:38.939144   26778 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:02:39.088367   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 19:02:39.104841   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:02:39.129824   26778 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 19:02:39.129879   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:02:39.142601   26778 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:02:39.142674   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:02:39.154592   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:02:39.166689   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:02:39.179184   26778 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:02:39.192067   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:02:39.204521   26778 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:02:39.226575   26778 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
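The sed commands above pin the pause image, switch cri-o to the cgroupfs cgroup manager with conmon in the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls in /etc/crio/crio.conf.d/02-crio.conf. The Go sketch below expresses the same intent as plain string rewriting; it is an illustration of the edits, not minikube's implementation.

package main

import (
	"fmt"
	"regexp"
)

// rewrite applies the same three edits the log performs with sed.
func rewrite(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	return conf
}

func main() {
	fmt.Println(rewrite("pause_image = \"old\"\ncgroup_manager = \"systemd\""))
}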
	I0429 19:02:39.238581   26778 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:02:39.248932   26778 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 19:02:39.248996   26778 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 19:02:39.266556   26778 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:02:39.279346   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:02:39.434284   26778 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 19:02:39.601503   26778 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:02:39.601594   26778 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:02:39.607316   26778 start.go:562] Will wait 60s for crictl version
	I0429 19:02:39.607388   26778 ssh_runner.go:195] Run: which crictl
	I0429 19:02:39.611697   26778 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:02:39.659249   26778 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 19:02:39.659378   26778 ssh_runner.go:195] Run: crio --version
	I0429 19:02:39.690516   26778 ssh_runner.go:195] Run: crio --version
	I0429 19:02:39.729860   26778 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 19:02:39.731245   26778 out.go:177]   - env NO_PROXY=192.168.39.52
	I0429 19:02:39.732491   26778 out.go:177]   - env NO_PROXY=192.168.39.52,192.168.39.27
	I0429 19:02:39.733604   26778 main.go:141] libmachine: (ha-058855-m03) Calling .GetIP
	I0429 19:02:39.736040   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:39.736447   26778 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:02:39.736470   26778 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:02:39.736659   26778 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 19:02:39.742285   26778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:02:39.756775   26778 mustload.go:65] Loading cluster: ha-058855
	I0429 19:02:39.757045   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:02:39.757316   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:02:39.757351   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:02:39.773951   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0429 19:02:39.774445   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:02:39.774932   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:02:39.774961   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:02:39.775297   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:02:39.775505   26778 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:02:39.777196   26778 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:02:39.777471   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:02:39.777505   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:02:39.792184   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I0429 19:02:39.792554   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:02:39.793011   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:02:39.793038   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:02:39.793327   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:02:39.793506   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:02:39.793691   26778 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855 for IP: 192.168.39.215
	I0429 19:02:39.793706   26778 certs.go:194] generating shared ca certs ...
	I0429 19:02:39.793721   26778 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:02:39.793849   26778 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 19:02:39.793893   26778 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 19:02:39.793904   26778 certs.go:256] generating profile certs ...
	I0429 19:02:39.793971   26778 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key
	I0429 19:02:39.794003   26778 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.9163a6e8
	I0429 19:02:39.794035   26778 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.9163a6e8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.52 192.168.39.27 192.168.39.215 192.168.39.254]
	I0429 19:02:39.991904   26778 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.9163a6e8 ...
	I0429 19:02:39.991934   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.9163a6e8: {Name:mkf6aafe3c448ab66972fe7404e3da8fa4ed24be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:02:39.992108   26778 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.9163a6e8 ...
	I0429 19:02:39.992125   26778 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.9163a6e8: {Name:mk5a0d385f233676a34eab1265452db88346fefc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:02:39.992226   26778 certs.go:381] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.9163a6e8 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt
	I0429 19:02:39.992394   26778 certs.go:385] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.9163a6e8 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key
	I0429 19:02:39.992561   26778 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key
	I0429 19:02:39.992580   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:02:39.992601   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:02:39.992621   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:02:39.992643   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:02:39.992660   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:02:39.992677   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:02:39.992694   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:02:39.992711   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:02:39.992773   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 19:02:39.992812   26778 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 19:02:39.992825   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 19:02:39.992855   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 19:02:39.992885   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 19:02:39.992911   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 19:02:39.992964   26778 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:02:39.993006   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:02:39.993025   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem -> /usr/share/ca-certificates/15124.pem
	I0429 19:02:39.993043   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /usr/share/ca-certificates/151242.pem
	I0429 19:02:39.993081   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:02:39.996247   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:02:39.996613   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:02:39.996640   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:02:39.996754   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:02:39.996959   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:02:39.997100   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:02:39.997217   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:02:40.086469   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0429 19:02:40.093590   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0429 19:02:40.107018   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0429 19:02:40.113388   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0429 19:02:40.128724   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0429 19:02:40.133791   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0429 19:02:40.148153   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0429 19:02:40.153260   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0429 19:02:40.167027   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0429 19:02:40.172164   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0429 19:02:40.184500   26778 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0429 19:02:40.189911   26778 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0429 19:02:40.202889   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:02:40.234045   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 19:02:40.260292   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:02:40.287864   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:02:40.316781   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0429 19:02:40.345179   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 19:02:40.373750   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:02:40.403708   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 19:02:40.432090   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:02:40.459413   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 19:02:40.490901   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 19:02:40.518701   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0429 19:02:40.538907   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0429 19:02:40.559221   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0429 19:02:40.578670   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0429 19:02:40.598072   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0429 19:02:40.616893   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0429 19:02:40.636592   26778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0429 19:02:40.655859   26778 ssh_runner.go:195] Run: openssl version
	I0429 19:02:40.662093   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:02:40.674096   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:02:40.679433   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:02:40.679485   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:02:40.685942   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:02:40.698587   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 19:02:40.711531   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 19:02:40.717116   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:02:40.717184   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 19:02:40.724132   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 19:02:40.736969   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 19:02:40.749856   26778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 19:02:40.755390   26778 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:02:40.755438   26778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 19:02:40.761984   26778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
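The three command pairs above implement OpenSSL's hashed-directory lookup: for each CA PEM installed under /usr/share/ca-certificates, the subject hash is computed with `openssl x509 -hash -noout` and a /etc/ssl/certs/<hash>.0 symlink is created so the TLS stack can find the certificate by hash. A minimal Go sketch of that convention, assuming openssl is on PATH; the paths and function below are illustrative, not minikube's own code:

// hash_link.go - sketch of the "openssl x509 -hash" + "ln -fs" step seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) error {
	// Equivalent of: openssl x509 -hash -noout -in <certPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// Equivalent of: ln -fs <certPath> <certsDir>/<hash>.0
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}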
	I0429 19:02:40.774254   26778 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:02:40.779102   26778 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 19:02:40.779168   26778 kubeadm.go:928] updating node {m03 192.168.39.215 8443 v1.30.0 crio true true} ...
	I0429 19:02:40.779258   26778 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-058855-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:02:40.779288   26778 kube-vip.go:115] generating kube-vip config ...
	I0429 19:02:40.779317   26778 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 19:02:40.797341   26778 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 19:02:40.797421   26778 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
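The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs kube-vip as a static pod that advertises the control-plane VIP 192.168.39.254 and, with lb_enable/lb_port set, load-balances API traffic on port 8443 across the control-plane nodes. A minimal sketch of rendering such a manifest from a template; the template text and struct below are illustrative, not minikube's actual kube-vip template:

// kubevip_template.go - sketch of generating a kube-vip static-pod manifest.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: vip_interface
      value: "{{ .Interface }}"
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

type params struct {
	Image, VIP, Interface, Port string
}

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the log above: VIP 192.168.39.254 on eth0, API port 8443.
	_ = tmpl.Execute(os.Stdout, params{
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
		VIP:       "192.168.39.254",
		Interface: "eth0",
		Port:      "8443",
	})
}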
	I0429 19:02:40.797483   26778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:02:40.809371   26778 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0429 19:02:40.809435   26778 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0429 19:02:40.821464   26778 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0429 19:02:40.821473   26778 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0429 19:02:40.821489   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:02:40.821509   26778 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0429 19:02:40.821516   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:02:40.821529   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:02:40.821575   26778 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0429 19:02:40.821594   26778 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0429 19:02:40.841709   26778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0429 19:02:40.841752   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0429 19:02:40.841754   26778 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:02:40.841829   26778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0429 19:02:40.841845   26778 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0429 19:02:40.841855   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0429 19:02:40.895363   26778 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0429 19:02:40.895410   26778 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
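The "Not caching binary" lines above fetch kubeadm, kubectl and kubelet from dl.k8s.io, with the ?checksum=file:... suffix asking the downloader to verify each file against its published .sha256 before it is copied into /var/lib/minikube/binaries. A minimal Go sketch of that checksum-verified fetch; error handling is trimmed and the destination path is illustrative:

// fetch_verify.go - sketch of downloading a release binary and checking its sha256.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body) // fine for a sketch; stream to disk for large binaries
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // the .sha256 file holds the hex digest
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		panic(err)
	}
}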
	I0429 19:02:41.867601   26778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0429 19:02:41.880075   26778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0429 19:02:41.900860   26778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:02:41.921885   26778 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0429 19:02:41.943067   26778 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 19:02:41.948246   26778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:02:41.964016   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:02:42.112311   26778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:02:42.135230   26778 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:02:42.135576   26778 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:02:42.135614   26778 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:02:42.152743   26778 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44291
	I0429 19:02:42.153274   26778 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:02:42.153761   26778 main.go:141] libmachine: Using API Version  1
	I0429 19:02:42.153785   26778 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:02:42.154122   26778 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:02:42.154324   26778 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:02:42.154469   26778 start.go:316] joinCluster: &{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:02:42.154646   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0429 19:02:42.154672   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:02:42.158209   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:02:42.158721   26778 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:02:42.158756   26778 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:02:42.158969   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:02:42.159195   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:02:42.159371   26778 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:02:42.159552   26778 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:02:42.362024   26778 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:02:42.362090   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0snnwf.nqbstml13rkzgrsg --discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-058855-m03 --control-plane --apiserver-advertise-address=192.168.39.215 --apiserver-bind-port=8443"
	I0429 19:03:07.495774   26778 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0snnwf.nqbstml13rkzgrsg --discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-058855-m03 --control-plane --apiserver-advertise-address=192.168.39.215 --apiserver-bind-port=8443": (25.133648499s)
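The join command above is built from a fresh bootstrap token plus the --discovery-token-ca-cert-hash pin, which is a SHA-256 over the cluster CA certificate's Subject Public Key Info. A minimal Go sketch of deriving that value; the CA path below is illustrative:

// ca_cert_hash.go - sketch of computing the "sha256:<hex>" CA public key pin.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in CA file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the Subject Public Key Info, the same material kubeadm pins.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}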
	I0429 19:03:07.495812   26778 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0429 19:03:08.135577   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-058855-m03 minikube.k8s.io/updated_at=2024_04_29T19_03_08_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-058855 minikube.k8s.io/primary=false
	I0429 19:03:08.280836   26778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-058855-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0429 19:03:08.434664   26778 start.go:318] duration metric: took 26.280192185s to joinCluster
	I0429 19:03:08.434750   26778 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:03:08.436414   26778 out.go:177] * Verifying Kubernetes components...
	I0429 19:03:08.435176   26778 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:03:08.437771   26778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:03:08.666204   26778 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:03:08.683821   26778 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:03:08.684159   26778 kapi.go:59] client config for ha-058855: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.crt", KeyFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key", CAFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0429 19:03:08.684257   26778 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.52:8443
	I0429 19:03:08.684575   26778 node_ready.go:35] waiting up to 6m0s for node "ha-058855-m03" to be "Ready" ...
	I0429 19:03:08.684675   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:08.684687   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:08.684697   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:08.684706   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:08.688603   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:09.184793   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:09.184818   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:09.184827   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:09.184831   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:09.189995   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:03:09.685424   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:09.685448   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:09.685459   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:09.685464   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:09.689593   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:10.185544   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:10.185567   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:10.185576   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:10.185581   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:10.190409   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:10.684943   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:10.684963   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:10.684969   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:10.684972   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:10.689628   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:10.690791   26778 node_ready.go:53] node "ha-058855-m03" has status "Ready":"False"
	I0429 19:03:11.185285   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:11.185315   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:11.185327   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:11.185332   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:11.188959   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:11.684929   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:11.684950   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:11.684961   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:11.684966   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:11.689126   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:12.185695   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:12.185720   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:12.185737   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:12.185744   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:12.190216   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:12.685151   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:12.685177   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:12.685186   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:12.685189   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:12.689239   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:13.185654   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:13.185681   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:13.185691   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:13.185695   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:13.190613   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:13.191427   26778 node_ready.go:53] node "ha-058855-m03" has status "Ready":"False"
	I0429 19:03:13.685743   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:13.685766   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:13.685774   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:13.685778   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:13.692609   26778 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0429 19:03:14.185752   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:14.185772   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:14.185780   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:14.185786   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:14.190560   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:14.684852   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:14.684871   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:14.684879   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:14.684885   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:14.688661   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:15.185463   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:15.185492   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:15.185502   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:15.185507   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:15.190585   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:03:15.191676   26778 node_ready.go:53] node "ha-058855-m03" has status "Ready":"False"
	I0429 19:03:15.685666   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:15.685692   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:15.685703   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:15.685710   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:15.697883   26778 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0429 19:03:16.185080   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:16.185102   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.185110   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.185116   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.189212   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:16.189931   26778 node_ready.go:49] node "ha-058855-m03" has status "Ready":"True"
	I0429 19:03:16.189947   26778 node_ready.go:38] duration metric: took 7.505347329s for node "ha-058855-m03" to be "Ready" ...
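The wait above is a simple poll: GET /api/v1/nodes/ha-058855-m03 roughly every 500ms until the node reports the Ready condition as True, which took about 7.5s here. A minimal client-go sketch of the same wait; the kubeconfig path and node name are illustrative:

// wait_node_ready.go - sketch of polling a node until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-058855-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}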
	I0429 19:03:16.189955   26778 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:03:16.190009   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:03:16.190018   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.190025   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.190029   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.197217   26778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:03:16.204930   26778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bbq9x" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.205009   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-bbq9x
	I0429 19:03:16.205018   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.205025   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.205030   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.208475   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.209464   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:16.209482   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.209494   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.209500   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.213112   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.213556   26778 pod_ready.go:92] pod "coredns-7db6d8ff4d-bbq9x" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:16.213573   26778 pod_ready.go:81] duration metric: took 8.617213ms for pod "coredns-7db6d8ff4d-bbq9x" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.213585   26778 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-njch8" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.213642   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-njch8
	I0429 19:03:16.213667   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.213681   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.213693   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.217199   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.217860   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:16.217875   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.217881   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.217884   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.220793   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:16.221539   26778 pod_ready.go:92] pod "coredns-7db6d8ff4d-njch8" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:16.221561   26778 pod_ready.go:81] duration metric: took 7.964356ms for pod "coredns-7db6d8ff4d-njch8" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.221573   26778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.221642   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855
	I0429 19:03:16.221650   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.221657   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.221664   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.224856   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.225524   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:16.225538   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.225545   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.225548   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.228517   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:16.229130   26778 pod_ready.go:92] pod "etcd-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:16.229154   26778 pod_ready.go:81] duration metric: took 7.568737ms for pod "etcd-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.229167   26778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.229236   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m02
	I0429 19:03:16.229248   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.229258   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.229269   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.232144   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:16.232920   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:16.232938   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.232948   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.232954   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.235772   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:16.236461   26778 pod_ready.go:92] pod "etcd-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:16.236476   26778 pod_ready.go:81] duration metric: took 7.297385ms for pod "etcd-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.236485   26778 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:16.385852   26778 request.go:629] Waited for 149.315468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:16.385926   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:16.385932   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.385938   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.385942   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.389444   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.585762   26778 request.go:629] Waited for 195.359427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:16.585816   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:16.585821   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.585831   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.585836   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.589427   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.785531   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:16.785566   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.785576   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.785584   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.789426   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:16.985809   26778 request.go:629] Waited for 195.39075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:16.985896   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:16.985904   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:16.985914   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:16.985922   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:16.990297   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:17.236736   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:17.236763   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:17.236774   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:17.236783   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:17.241950   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:03:17.385859   26778 request.go:629] Waited for 142.304779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:17.385918   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:17.385923   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:17.385930   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:17.385933   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:17.389434   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:17.737476   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:17.737502   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:17.737508   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:17.737512   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:17.741096   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:17.785407   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:17.785426   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:17.785434   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:17.785445   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:17.788949   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:18.236936   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:18.236957   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:18.236965   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:18.236969   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:18.241328   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:18.242530   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:18.242547   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:18.242559   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:18.242567   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:18.246029   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:18.247007   26778 pod_ready.go:102] pod "etcd-ha-058855-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 19:03:18.737275   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:18.737298   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:18.737306   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:18.737311   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:18.740677   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:18.741639   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:18.741657   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:18.741665   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:18.741670   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:18.745453   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:19.237477   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:19.237502   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:19.237510   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:19.237512   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:19.243186   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:03:19.243915   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:19.243929   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:19.243936   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:19.243940   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:19.248158   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:19.736926   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:19.736944   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:19.736951   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:19.736955   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:19.741603   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:19.742409   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:19.742429   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:19.742440   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:19.742446   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:19.745289   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:20.236917   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:20.236941   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:20.236948   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:20.236952   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:20.240954   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:20.241911   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:20.241930   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:20.241940   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:20.241946   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:20.246457   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:20.247195   26778 pod_ready.go:102] pod "etcd-ha-058855-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 19:03:20.737686   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:20.737706   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:20.737714   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:20.737720   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:20.741368   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:20.742258   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:20.742277   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:20.742288   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:20.742295   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:20.746684   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:21.236629   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:21.236651   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:21.236660   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:21.236664   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:21.240181   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:21.240922   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:21.240935   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:21.240942   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:21.240945   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:21.243692   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:21.737261   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:21.737286   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:21.737293   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:21.737299   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:21.741088   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:21.742043   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:21.742081   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:21.742097   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:21.742104   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:21.746837   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:22.238016   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:22.238134   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:22.238155   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:22.238163   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:22.243097   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:22.243938   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:22.243952   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:22.243959   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:22.243963   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:22.247173   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:22.247951   26778 pod_ready.go:102] pod "etcd-ha-058855-m03" in "kube-system" namespace has status "Ready":"False"
	I0429 19:03:22.737178   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:22.737201   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:22.737210   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:22.737217   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:22.741015   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:22.741963   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:22.741981   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:22.741989   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:22.741994   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:22.744852   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:23.236994   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:23.237017   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.237026   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.237030   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.240942   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:23.242037   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:23.242055   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.242079   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.242085   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.246311   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:23.737687   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/etcd-ha-058855-m03
	I0429 19:03:23.737715   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.737723   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.737727   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.741346   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:23.742163   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:23.742183   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.742193   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.742202   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.745980   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:23.747161   26778 pod_ready.go:92] pod "etcd-ha-058855-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:23.747177   26778 pod_ready.go:81] duration metric: took 7.510686398s for pod "etcd-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.747195   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.747244   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-058855
	I0429 19:03:23.747252   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.747259   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.747264   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.750007   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:23.750784   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:23.750798   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.750804   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.750808   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.753646   26778 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 19:03:23.754347   26778 pod_ready.go:92] pod "kube-apiserver-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:23.754369   26778 pod_ready.go:81] duration metric: took 7.166746ms for pod "kube-apiserver-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.754382   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.754449   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-058855-m02
	I0429 19:03:23.754461   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.754470   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.754480   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.757583   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:23.758348   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:23.758369   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.758379   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.758386   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.761583   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:23.762376   26778 pod_ready.go:92] pod "kube-apiserver-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:23.762403   26778 pod_ready.go:81] duration metric: took 8.008595ms for pod "kube-apiserver-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.762416   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.762477   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-058855-m03
	I0429 19:03:23.762489   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.762498   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.762506   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.765600   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:23.785577   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:23.785600   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.785614   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.785624   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.789644   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:23.790113   26778 pod_ready.go:92] pod "kube-apiserver-ha-058855-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:23.790135   26778 pod_ready.go:81] duration metric: took 27.710177ms for pod "kube-apiserver-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.790152   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:23.985599   26778 request.go:629] Waited for 195.362216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855
	I0429 19:03:23.985743   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855
	I0429 19:03:23.985760   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:23.985770   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:23.985780   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:23.990565   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:24.185616   26778 request.go:629] Waited for 194.385468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:24.185685   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:24.185691   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:24.185698   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:24.185701   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:24.191403   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:03:24.192456   26778 pod_ready.go:92] pod "kube-controller-manager-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:24.192487   26778 pod_ready.go:81] duration metric: took 402.32346ms for pod "kube-controller-manager-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:24.192501   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:24.385491   26778 request.go:629] Waited for 192.913821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855-m02
	I0429 19:03:24.385587   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855-m02
	I0429 19:03:24.385600   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:24.385635   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:24.385649   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:24.389934   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:24.585411   26778 request.go:629] Waited for 194.33868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:24.585462   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:24.585467   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:24.585474   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:24.585480   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:24.589092   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:24.589922   26778 pod_ready.go:92] pod "kube-controller-manager-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:24.589940   26778 pod_ready.go:81] duration metric: took 397.432121ms for pod "kube-controller-manager-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:24.589950   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:24.785450   26778 request.go:629] Waited for 195.433354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855-m03
	I0429 19:03:24.785524   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-058855-m03
	I0429 19:03:24.785534   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:24.785546   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:24.785558   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:24.789190   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:24.985397   26778 request.go:629] Waited for 195.408341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:24.985451   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:24.985456   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:24.985464   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:24.985468   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:24.989538   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:24.990200   26778 pod_ready.go:92] pod "kube-controller-manager-ha-058855-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:24.990220   26778 pod_ready.go:81] duration metric: took 400.262823ms for pod "kube-controller-manager-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:24.990234   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29svc" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:25.185154   26778 request.go:629] Waited for 194.843168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29svc
	I0429 19:03:25.185213   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29svc
	I0429 19:03:25.185227   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:25.185239   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:25.185248   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:25.189244   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:25.385293   26778 request.go:629] Waited for 195.292348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:25.385381   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:25.385392   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:25.385402   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:25.385411   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:25.389467   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:25.390387   26778 pod_ready.go:92] pod "kube-proxy-29svc" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:25.390408   26778 pod_ready.go:81] duration metric: took 400.167281ms for pod "kube-proxy-29svc" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:25.390420   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nz2rv" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:25.585353   26778 request.go:629] Waited for 194.866158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz2rv
	I0429 19:03:25.585427   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nz2rv
	I0429 19:03:25.585432   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:25.585445   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:25.585463   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:25.589742   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:25.785860   26778 request.go:629] Waited for 195.365291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:25.785937   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:25.785942   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:25.785950   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:25.785956   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:25.789868   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:25.790609   26778 pod_ready.go:92] pod "kube-proxy-nz2rv" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:25.790627   26778 pod_ready.go:81] duration metric: took 400.194931ms for pod "kube-proxy-nz2rv" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:25.790636   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xldlc" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:25.986077   26778 request.go:629] Waited for 195.357381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xldlc
	I0429 19:03:25.986136   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xldlc
	I0429 19:03:25.986141   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:25.986149   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:25.986154   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:25.990111   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:26.185751   26778 request.go:629] Waited for 194.862355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:26.185836   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:26.185850   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:26.185860   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:26.185868   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:26.190387   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:26.191230   26778 pod_ready.go:92] pod "kube-proxy-xldlc" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:26.191251   26778 pod_ready.go:81] duration metric: took 400.608193ms for pod "kube-proxy-xldlc" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:26.191261   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:26.385320   26778 request.go:629] Waited for 193.992199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855
	I0429 19:03:26.385421   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855
	I0429 19:03:26.385432   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:26.385444   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:26.385453   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:26.389560   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:26.585558   26778 request.go:629] Waited for 195.251013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:26.585606   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855
	I0429 19:03:26.585611   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:26.585618   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:26.585621   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:26.589363   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:26.590528   26778 pod_ready.go:92] pod "kube-scheduler-ha-058855" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:26.590547   26778 pod_ready.go:81] duration metric: took 399.280221ms for pod "kube-scheduler-ha-058855" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:26.590556   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:26.785667   26778 request.go:629] Waited for 195.046202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855-m02
	I0429 19:03:26.785754   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855-m02
	I0429 19:03:26.785760   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:26.785777   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:26.785792   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:26.790042   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:26.985182   26778 request.go:629] Waited for 194.237698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:26.985263   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m02
	I0429 19:03:26.985275   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:26.985285   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:26.985293   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:26.989513   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:26.990273   26778 pod_ready.go:92] pod "kube-scheduler-ha-058855-m02" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:26.990291   26778 pod_ready.go:81] duration metric: took 399.728731ms for pod "kube-scheduler-ha-058855-m02" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:26.990312   26778 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:27.185770   26778 request.go:629] Waited for 195.380719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855-m03
	I0429 19:03:27.185863   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-058855-m03
	I0429 19:03:27.185874   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:27.185886   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:27.185895   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:27.189595   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:27.385739   26778 request.go:629] Waited for 195.383138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:27.385818   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes/ha-058855-m03
	I0429 19:03:27.385828   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:27.385838   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:27.385849   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:27.389594   26778 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0429 19:03:27.390414   26778 pod_ready.go:92] pod "kube-scheduler-ha-058855-m03" in "kube-system" namespace has status "Ready":"True"
	I0429 19:03:27.390438   26778 pod_ready.go:81] duration metric: took 400.115122ms for pod "kube-scheduler-ha-058855-m03" in "kube-system" namespace to be "Ready" ...
	I0429 19:03:27.390451   26778 pod_ready.go:38] duration metric: took 11.20048647s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:03:27.390463   26778 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:03:27.390512   26778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:03:27.408969   26778 api_server.go:72] duration metric: took 18.97418101s to wait for apiserver process to appear ...
	I0429 19:03:27.408993   26778 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:03:27.409017   26778 api_server.go:253] Checking apiserver healthz at https://192.168.39.52:8443/healthz ...
	I0429 19:03:27.415338   26778 api_server.go:279] https://192.168.39.52:8443/healthz returned 200:
	ok
	I0429 19:03:27.415400   26778 round_trippers.go:463] GET https://192.168.39.52:8443/version
	I0429 19:03:27.415407   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:27.415414   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:27.415418   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:27.416436   26778 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0429 19:03:27.416557   26778 api_server.go:141] control plane version: v1.30.0
	I0429 19:03:27.416577   26778 api_server.go:131] duration metric: took 7.576605ms to wait for apiserver health ...
	I0429 19:03:27.416587   26778 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:03:27.586019   26778 request.go:629] Waited for 169.347655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:03:27.586101   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:03:27.586109   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:27.586117   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:27.586126   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:27.593529   26778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:03:27.601873   26778 system_pods.go:59] 24 kube-system pods found
	I0429 19:03:27.601901   26778 system_pods.go:61] "coredns-7db6d8ff4d-bbq9x" [a016fbf8-4a91-4f2f-97da-44b6e2195885] Running
	I0429 19:03:27.601906   26778 system_pods.go:61] "coredns-7db6d8ff4d-njch8" [823d223d-f7bd-4b9c-bdd9-8d0ae063d449] Running
	I0429 19:03:27.601911   26778 system_pods.go:61] "etcd-ha-058855" [a7e579b9-771a-4bb2-819b-a98848f52b09] Running
	I0429 19:03:27.601914   26778 system_pods.go:61] "etcd-ha-058855-m02" [08e98635-58d8-460b-9432-4bb03c74099c] Running
	I0429 19:03:27.601917   26778 system_pods.go:61] "etcd-ha-058855-m03" [829b8eb9-5772-4861-9de4-57e88f869a71] Running
	I0429 19:03:27.601920   26778 system_pods.go:61] "kindnet-j42cd" [13d10343-b59f-490f-ac7c-973271cc27d2] Running
	I0429 19:03:27.601923   26778 system_pods.go:61] "kindnet-m4fgv" [be3e3c54-e4e3-42ff-8433-1411fbd7ef75] Running
	I0429 19:03:27.601925   26778 system_pods.go:61] "kindnet-xdtp4" [510a69a6-5bd3-44ba-a81f-6d35a38b6ad2] Running
	I0429 19:03:27.601928   26778 system_pods.go:61] "kube-apiserver-ha-058855" [d2eb7bde-88b9-4366-be20-593097820579] Running
	I0429 19:03:27.601931   26778 system_pods.go:61] "kube-apiserver-ha-058855-m02" [94599f7a-b9de-4db3-b858-a380793bbd34] Running
	I0429 19:03:27.601934   26778 system_pods.go:61] "kube-apiserver-ha-058855-m03" [db757bbb-f7b3-472f-a22a-7b828d6fa543] Running
	I0429 19:03:27.601938   26778 system_pods.go:61] "kube-controller-manager-ha-058855" [56527f4a-57d1-4a44-be01-7747abcbfce0] Running
	I0429 19:03:27.601941   26778 system_pods.go:61] "kube-controller-manager-ha-058855-m02" [201796e2-157c-40ce-bf68-c2472bab9e3a] Running
	I0429 19:03:27.601945   26778 system_pods.go:61] "kube-controller-manager-ha-058855-m03" [a8046d54-c4bf-4152-b27a-19555664e7de] Running
	I0429 19:03:27.601948   26778 system_pods.go:61] "kube-proxy-29svc" [1c889e3e-7390-4e06-8bf3-424117496b4b] Running
	I0429 19:03:27.601952   26778 system_pods.go:61] "kube-proxy-nz2rv" [32002a66-d55f-4011-bb78-c4c6e35238b3] Running
	I0429 19:03:27.601957   26778 system_pods.go:61] "kube-proxy-xldlc" [a01564cb-ea76-4cc5-abad-d2d70b79bf6d] Running
	I0429 19:03:27.601960   26778 system_pods.go:61] "kube-scheduler-ha-058855" [d71e876d-d5be-4671-924b-3fd828de92a1] Running
	I0429 19:03:27.601963   26778 system_pods.go:61] "kube-scheduler-ha-058855-m02" [69bbddf9-e5f6-4ede-abd0-762b0642fda4] Running
	I0429 19:03:27.601967   26778 system_pods.go:61] "kube-scheduler-ha-058855-m03" [7d259b08-e0c4-4424-bc8f-1171f5fe7739] Running
	I0429 19:03:27.601973   26778 system_pods.go:61] "kube-vip-ha-058855" [76e512c7-e0ea-417e-8239-63bb073dc04d] Running
	I0429 19:03:27.601975   26778 system_pods.go:61] "kube-vip-ha-058855-m02" [1569a60d-d6a1-4685-8405-689270322b97] Running
	I0429 19:03:27.601979   26778 system_pods.go:61] "kube-vip-ha-058855-m03" [aa222d89-ec33-45a5-b1f4-296e4b89c4b7] Running
	I0429 19:03:27.601982   26778 system_pods.go:61] "storage-provisioner" [1572f7da-1bda-4b9e-a5fc-315aae3ba592] Running
	I0429 19:03:27.601988   26778 system_pods.go:74] duration metric: took 185.395278ms to wait for pod list to return data ...
	I0429 19:03:27.601998   26778 default_sa.go:34] waiting for default service account to be created ...
	I0429 19:03:27.785435   26778 request.go:629] Waited for 183.349656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:03:27.785499   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/default/serviceaccounts
	I0429 19:03:27.785504   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:27.785512   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:27.785516   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:27.790928   26778 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 19:03:27.791073   26778 default_sa.go:45] found service account: "default"
	I0429 19:03:27.791093   26778 default_sa.go:55] duration metric: took 189.089492ms for default service account to be created ...
	I0429 19:03:27.791105   26778 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 19:03:27.985568   26778 request.go:629] Waited for 194.356514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:03:27.985643   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/namespaces/kube-system/pods
	I0429 19:03:27.985648   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:27.985656   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:27.985660   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:27.992905   26778 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0429 19:03:28.001131   26778 system_pods.go:86] 24 kube-system pods found
	I0429 19:03:28.001170   26778 system_pods.go:89] "coredns-7db6d8ff4d-bbq9x" [a016fbf8-4a91-4f2f-97da-44b6e2195885] Running
	I0429 19:03:28.001179   26778 system_pods.go:89] "coredns-7db6d8ff4d-njch8" [823d223d-f7bd-4b9c-bdd9-8d0ae063d449] Running
	I0429 19:03:28.001185   26778 system_pods.go:89] "etcd-ha-058855" [a7e579b9-771a-4bb2-819b-a98848f52b09] Running
	I0429 19:03:28.001192   26778 system_pods.go:89] "etcd-ha-058855-m02" [08e98635-58d8-460b-9432-4bb03c74099c] Running
	I0429 19:03:28.001198   26778 system_pods.go:89] "etcd-ha-058855-m03" [829b8eb9-5772-4861-9de4-57e88f869a71] Running
	I0429 19:03:28.001206   26778 system_pods.go:89] "kindnet-j42cd" [13d10343-b59f-490f-ac7c-973271cc27d2] Running
	I0429 19:03:28.001212   26778 system_pods.go:89] "kindnet-m4fgv" [be3e3c54-e4e3-42ff-8433-1411fbd7ef75] Running
	I0429 19:03:28.001218   26778 system_pods.go:89] "kindnet-xdtp4" [510a69a6-5bd3-44ba-a81f-6d35a38b6ad2] Running
	I0429 19:03:28.001224   26778 system_pods.go:89] "kube-apiserver-ha-058855" [d2eb7bde-88b9-4366-be20-593097820579] Running
	I0429 19:03:28.001230   26778 system_pods.go:89] "kube-apiserver-ha-058855-m02" [94599f7a-b9de-4db3-b858-a380793bbd34] Running
	I0429 19:03:28.001237   26778 system_pods.go:89] "kube-apiserver-ha-058855-m03" [db757bbb-f7b3-472f-a22a-7b828d6fa543] Running
	I0429 19:03:28.001243   26778 system_pods.go:89] "kube-controller-manager-ha-058855" [56527f4a-57d1-4a44-be01-7747abcbfce0] Running
	I0429 19:03:28.001255   26778 system_pods.go:89] "kube-controller-manager-ha-058855-m02" [201796e2-157c-40ce-bf68-c2472bab9e3a] Running
	I0429 19:03:28.001263   26778 system_pods.go:89] "kube-controller-manager-ha-058855-m03" [a8046d54-c4bf-4152-b27a-19555664e7de] Running
	I0429 19:03:28.001280   26778 system_pods.go:89] "kube-proxy-29svc" [1c889e3e-7390-4e06-8bf3-424117496b4b] Running
	I0429 19:03:28.001287   26778 system_pods.go:89] "kube-proxy-nz2rv" [32002a66-d55f-4011-bb78-c4c6e35238b3] Running
	I0429 19:03:28.001293   26778 system_pods.go:89] "kube-proxy-xldlc" [a01564cb-ea76-4cc5-abad-d2d70b79bf6d] Running
	I0429 19:03:28.001303   26778 system_pods.go:89] "kube-scheduler-ha-058855" [d71e876d-d5be-4671-924b-3fd828de92a1] Running
	I0429 19:03:28.001309   26778 system_pods.go:89] "kube-scheduler-ha-058855-m02" [69bbddf9-e5f6-4ede-abd0-762b0642fda4] Running
	I0429 19:03:28.001315   26778 system_pods.go:89] "kube-scheduler-ha-058855-m03" [7d259b08-e0c4-4424-bc8f-1171f5fe7739] Running
	I0429 19:03:28.001325   26778 system_pods.go:89] "kube-vip-ha-058855" [76e512c7-e0ea-417e-8239-63bb073dc04d] Running
	I0429 19:03:28.001331   26778 system_pods.go:89] "kube-vip-ha-058855-m02" [1569a60d-d6a1-4685-8405-689270322b97] Running
	I0429 19:03:28.001340   26778 system_pods.go:89] "kube-vip-ha-058855-m03" [aa222d89-ec33-45a5-b1f4-296e4b89c4b7] Running
	I0429 19:03:28.001346   26778 system_pods.go:89] "storage-provisioner" [1572f7da-1bda-4b9e-a5fc-315aae3ba592] Running
	I0429 19:03:28.001359   26778 system_pods.go:126] duration metric: took 210.243362ms to wait for k8s-apps to be running ...
	I0429 19:03:28.001370   26778 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 19:03:28.001424   26778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:03:28.018131   26778 system_svc.go:56] duration metric: took 16.748659ms WaitForService to wait for kubelet
	I0429 19:03:28.018167   26778 kubeadm.go:576] duration metric: took 19.583380603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:03:28.018189   26778 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:03:28.185610   26778 request.go:629] Waited for 167.343861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.52:8443/api/v1/nodes
	I0429 19:03:28.185695   26778 round_trippers.go:463] GET https://192.168.39.52:8443/api/v1/nodes
	I0429 19:03:28.185704   26778 round_trippers.go:469] Request Headers:
	I0429 19:03:28.185717   26778 round_trippers.go:473]     Accept: application/json, */*
	I0429 19:03:28.185725   26778 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0429 19:03:28.190267   26778 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0429 19:03:28.191669   26778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:03:28.191695   26778 node_conditions.go:123] node cpu capacity is 2
	I0429 19:03:28.191709   26778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:03:28.191714   26778 node_conditions.go:123] node cpu capacity is 2
	I0429 19:03:28.191718   26778 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:03:28.191722   26778 node_conditions.go:123] node cpu capacity is 2
	I0429 19:03:28.191731   26778 node_conditions.go:105] duration metric: took 173.532452ms to run NodePressure ...
	I0429 19:03:28.191750   26778 start.go:240] waiting for startup goroutines ...
	I0429 19:03:28.191774   26778 start.go:254] writing updated cluster config ...
	I0429 19:03:28.192169   26778 ssh_runner.go:195] Run: rm -f paused
	I0429 19:03:28.245165   26778 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 19:03:28.247274   26778 out.go:177] * Done! kubectl is now configured to use "ha-058855" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.014935671Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714417687014904581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=953599b3-19cd-4767-88b0-1ad70b3367d4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.016275664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9c20362-f35e-4465-9262-2f2f0a3c63f5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.016327869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9c20362-f35e-4465-9262-2f2f0a3c63f5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.016573328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417414458881064,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9fee3659bbbc0cfcb39700e786b8abaca5828c3a369213c71f8c24aead35f1,PodSandboxId:7535117780f63199f4d557275f58c4dbd45457c95f56a37f6dc4909ddb1934dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714417187571512441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187512853039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187459474931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a
91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ea9216c1d7c2ce6fc652bc1f2020e90ddd86266e6494480d19d53d424bfc01,PodSandboxId:99a43785ac56c5dd7e66b63e069f2b805e50ab4d83c6949997dd6ae7806b297e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17144171
84995953953,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417184691594429,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45ced81842ab99aabac98f2ac5d6e1b110a73465d11e56c87d6166d153839862,PodSandboxId:092f8bef902efe571a7c4bb49769bc4109d8855d291b7678d17ea4c9ea1e72fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417166093403108,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab58bfc4970fad85a73d065ba4eec99e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417163290366549,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9513857b60ae4b75efae6de6be9d83d589f9d511ba539d01bc7e371a1a0e090,PodSandboxId:d5c792e26a63f5182b337b3916dad1dff032b53207ab9bc1da61cbaee803b342,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417163246853598,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9139aba22c80eaaf47d55790db8284fc4c3d959ba23904a36880d4d936f4622,PodSandboxId:5dc22f2ba00277c3f8923983e3b802392c4264210a68e2e15c1e7fae5c399b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417163227484503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417163188534709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9c20362-f35e-4465-9262-2f2f0a3c63f5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.059146071Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ea767da-5d26-42f0-9ab2-9c4868182b37 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.059249552Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ea767da-5d26-42f0-9ab2-9c4868182b37 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.061055289Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62ac024a-ef4c-4d21-bdbf-671d2e440243 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.061521635Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714417687061498008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62ac024a-ef4c-4d21-bdbf-671d2e440243 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.062336781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3346f3c5-6581-4edd-9194-6e811e723925 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.062388092Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3346f3c5-6581-4edd-9194-6e811e723925 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.062607828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417414458881064,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9fee3659bbbc0cfcb39700e786b8abaca5828c3a369213c71f8c24aead35f1,PodSandboxId:7535117780f63199f4d557275f58c4dbd45457c95f56a37f6dc4909ddb1934dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714417187571512441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187512853039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187459474931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a
91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ea9216c1d7c2ce6fc652bc1f2020e90ddd86266e6494480d19d53d424bfc01,PodSandboxId:99a43785ac56c5dd7e66b63e069f2b805e50ab4d83c6949997dd6ae7806b297e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17144171
84995953953,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417184691594429,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45ced81842ab99aabac98f2ac5d6e1b110a73465d11e56c87d6166d153839862,PodSandboxId:092f8bef902efe571a7c4bb49769bc4109d8855d291b7678d17ea4c9ea1e72fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417166093403108,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab58bfc4970fad85a73d065ba4eec99e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417163290366549,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9513857b60ae4b75efae6de6be9d83d589f9d511ba539d01bc7e371a1a0e090,PodSandboxId:d5c792e26a63f5182b337b3916dad1dff032b53207ab9bc1da61cbaee803b342,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417163246853598,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9139aba22c80eaaf47d55790db8284fc4c3d959ba23904a36880d4d936f4622,PodSandboxId:5dc22f2ba00277c3f8923983e3b802392c4264210a68e2e15c1e7fae5c399b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417163227484503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417163188534709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3346f3c5-6581-4edd-9194-6e811e723925 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.106238715Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=087c04f2-f1e4-45d0-be53-1539ab2b9237 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.106309095Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=087c04f2-f1e4-45d0-be53-1539ab2b9237 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.108241991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=104445ec-f50e-4b1d-9eb2-fefc2a8f9670 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.108655765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714417687108625678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=104445ec-f50e-4b1d-9eb2-fefc2a8f9670 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.109411907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d99c3e8-e8c6-47d9-ab1b-79caad4c65f1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.109493645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d99c3e8-e8c6-47d9-ab1b-79caad4c65f1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.109746312Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417414458881064,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9fee3659bbbc0cfcb39700e786b8abaca5828c3a369213c71f8c24aead35f1,PodSandboxId:7535117780f63199f4d557275f58c4dbd45457c95f56a37f6dc4909ddb1934dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714417187571512441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187512853039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187459474931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a
91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ea9216c1d7c2ce6fc652bc1f2020e90ddd86266e6494480d19d53d424bfc01,PodSandboxId:99a43785ac56c5dd7e66b63e069f2b805e50ab4d83c6949997dd6ae7806b297e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17144171
84995953953,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417184691594429,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45ced81842ab99aabac98f2ac5d6e1b110a73465d11e56c87d6166d153839862,PodSandboxId:092f8bef902efe571a7c4bb49769bc4109d8855d291b7678d17ea4c9ea1e72fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417166093403108,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab58bfc4970fad85a73d065ba4eec99e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417163290366549,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9513857b60ae4b75efae6de6be9d83d589f9d511ba539d01bc7e371a1a0e090,PodSandboxId:d5c792e26a63f5182b337b3916dad1dff032b53207ab9bc1da61cbaee803b342,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417163246853598,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9139aba22c80eaaf47d55790db8284fc4c3d959ba23904a36880d4d936f4622,PodSandboxId:5dc22f2ba00277c3f8923983e3b802392c4264210a68e2e15c1e7fae5c399b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417163227484503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417163188534709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d99c3e8-e8c6-47d9-ab1b-79caad4c65f1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.153198114Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc154370-bff1-4b3b-906a-0adb3c2075f8 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.153322956Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc154370-bff1-4b3b-906a-0adb3c2075f8 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.156074227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea0ca2a4-b400-489f-b15e-c86b10b25c69 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.156507180Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714417687156484737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea0ca2a4-b400-489f-b15e-c86b10b25c69 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.157671083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0d2686c4-883d-44fd-bd8c-c5e949602ed7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.157749535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0d2686c4-883d-44fd-bd8c-c5e949602ed7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:08:07 ha-058855 crio[682]: time="2024-04-29 19:08:07.158058126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417414458881064,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db9fee3659bbbc0cfcb39700e786b8abaca5828c3a369213c71f8c24aead35f1,PodSandboxId:7535117780f63199f4d557275f58c4dbd45457c95f56a37f6dc4909ddb1934dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714417187571512441,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187512853039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417187459474931,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a
91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ea9216c1d7c2ce6fc652bc1f2020e90ddd86266e6494480d19d53d424bfc01,PodSandboxId:99a43785ac56c5dd7e66b63e069f2b805e50ab4d83c6949997dd6ae7806b297e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:17144171
84995953953,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417184691594429,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45ced81842ab99aabac98f2ac5d6e1b110a73465d11e56c87d6166d153839862,PodSandboxId:092f8bef902efe571a7c4bb49769bc4109d8855d291b7678d17ea4c9ea1e72fa,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417166093403108,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab58bfc4970fad85a73d065ba4eec99e,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417163290366549,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9513857b60ae4b75efae6de6be9d83d589f9d511ba539d01bc7e371a1a0e090,PodSandboxId:d5c792e26a63f5182b337b3916dad1dff032b53207ab9bc1da61cbaee803b342,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417163246853598,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9139aba22c80eaaf47d55790db8284fc4c3d959ba23904a36880d4d936f4622,PodSandboxId:5dc22f2ba00277c3f8923983e3b802392c4264210a68e2e15c1e7fae5c399b3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417163227484503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417163188534709,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0d2686c4-883d-44fd-bd8c-c5e949602ed7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3ebcb4aac0715       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   5d6b9a26ffca4       busybox-fc5497c4f-nst7c
	db9fee3659bbb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       0                   7535117780f63       storage-provisioner
	35b38d136f10c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   27fc4fec5e3f0       coredns-7db6d8ff4d-njch8
	db099f7f56f78       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago       Running             coredns                   0                   1050f7bafa98e       coredns-7db6d8ff4d-bbq9x
	38ea9216c1d7c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      8 minutes ago       Running             kindnet-cni               0                   99a43785ac56c       kindnet-j42cd
	2e3b2e1683b77       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      8 minutes ago       Running             kube-proxy                0                   fe7fa96de2987       kube-proxy-xldlc
	45ced81842ab9       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   092f8bef902ef       kube-vip-ha-058855
	3c1cf7e86cc05       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      8 minutes ago       Running             kube-scheduler            0                   eaa9cff42f55b       kube-scheduler-ha-058855
	d9513857b60ae       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      8 minutes ago       Running             kube-controller-manager   0                   d5c792e26a63f       kube-controller-manager-ha-058855
	d9139aba22c80       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      8 minutes ago       Running             kube-apiserver            0                   5dc22f2ba0027       kube-apiserver-ha-058855
	f653b7a6c4efb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   40b3f5ad731ff       etcd-ha-058855
	
	
	==> coredns [35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b] <==
	[INFO] 10.244.2.2:42994 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000095231s
	[INFO] 10.244.0.4:59286 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.018415771s
	[INFO] 10.244.0.4:34309 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225287s
	[INFO] 10.244.0.4:56402 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167209s
	[INFO] 10.244.1.2:40060 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001736403s
	[INFO] 10.244.1.2:46625 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114006s
	[INFO] 10.244.1.2:57265 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118743s
	[INFO] 10.244.1.2:34075 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000376654s
	[INFO] 10.244.1.2:37316 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000287017s
	[INFO] 10.244.2.2:55857 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148708s
	[INFO] 10.244.2.2:34046 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114435s
	[INFO] 10.244.2.2:59123 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013463s
	[INFO] 10.244.0.4:52788 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139069s
	[INFO] 10.244.0.4:54898 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174069s
	[INFO] 10.244.0.4:50441 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004412s
	[INFO] 10.244.1.2:34029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183007s
	[INFO] 10.244.1.2:34413 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011174s
	[INFO] 10.244.1.2:46424 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144489s
	[INFO] 10.244.1.2:35983 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116269s
	[INFO] 10.244.2.2:36513 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000459857s
	[INFO] 10.244.0.4:40033 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000351605s
	[INFO] 10.244.0.4:45496 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128261s
	[INFO] 10.244.1.2:58777 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000204086s
	[INFO] 10.244.2.2:46697 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227863s
	[INFO] 10.244.2.2:60992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138077s
	
	
	==> coredns [db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe] <==
	[INFO] 10.244.2.2:38010 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00188289s
	[INFO] 10.244.0.4:49486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160106s
	[INFO] 10.244.0.4:50702 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002836903s
	[INFO] 10.244.0.4:35661 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000120275s
	[INFO] 10.244.0.4:59999 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127179s
	[INFO] 10.244.0.4:38237 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178889s
	[INFO] 10.244.1.2:51028 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274871s
	[INFO] 10.244.1.2:44471 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001330026s
	[INFO] 10.244.1.2:42432 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122996s
	[INFO] 10.244.2.2:59580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000294012s
	[INFO] 10.244.2.2:60659 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00179161s
	[INFO] 10.244.2.2:39549 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000317743s
	[INFO] 10.244.2.2:43315 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001176961s
	[INFO] 10.244.2.2:32992 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190177s
	[INFO] 10.244.0.4:46409 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000047581s
	[INFO] 10.244.2.2:53037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141835s
	[INFO] 10.244.2.2:44640 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000203835s
	[INFO] 10.244.2.2:58171 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090591s
	[INFO] 10.244.0.4:44158 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106787s
	[INFO] 10.244.0.4:57643 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000199048s
	[INFO] 10.244.1.2:57285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127384s
	[INFO] 10.244.1.2:53223 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223061s
	[INFO] 10.244.1.2:54113 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106292s
	[INFO] 10.244.2.2:57470 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012081s
	[INFO] 10.244.2.2:35174 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139962s
	
	
	==> describe nodes <==
	Name:               ha-058855
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T18_59_30_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 18:59:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:07:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:04:05 +0000   Mon, 29 Apr 2024 18:59:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:04:05 +0000   Mon, 29 Apr 2024 18:59:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:04:05 +0000   Mon, 29 Apr 2024 18:59:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:04:05 +0000   Mon, 29 Apr 2024 18:59:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    ha-058855
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4dd245ae2fbf4ffeb364af3ff6801808
	  System UUID:                4dd245ae-2fbf-4ffe-b364-af3ff6801808
	  Boot ID:                    41ab0acc-a7d3-4500-bada-adc41451a660
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nst7c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 coredns-7db6d8ff4d-bbq9x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m25s
	  kube-system                 coredns-7db6d8ff4d-njch8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m25s
	  kube-system                 etcd-ha-058855                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m38s
	  kube-system                 kindnet-j42cd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m25s
	  kube-system                 kube-apiserver-ha-058855             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-controller-manager-ha-058855    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-proxy-xldlc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-scheduler-ha-058855             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-vip-ha-058855                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m22s  kube-proxy       
	  Normal  Starting                 8m38s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m38s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m38s  kubelet          Node ha-058855 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m38s  kubelet          Node ha-058855 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m38s  kubelet          Node ha-058855 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m26s  node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Normal  NodeReady                8m21s  kubelet          Node ha-058855 status is now: NodeReady
	  Normal  RegisteredNode           6m2s   node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Normal  RegisteredNode           4m45s  node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	
	
	Name:               ha-058855-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_01_50_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:01:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:04:30 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 19:03:49 +0000   Mon, 29 Apr 2024 19:05:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 19:03:49 +0000   Mon, 29 Apr 2024 19:05:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 19:03:49 +0000   Mon, 29 Apr 2024 19:05:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 19:03:49 +0000   Mon, 29 Apr 2024 19:05:11 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-058855-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea727b7dfb674d998bb0a6c08dea140b
	  System UUID:                ea727b7d-fb67-4d99-8bb0-a6c08dea140b
	  Boot ID:                    990bbec7-ab66-4e93-ab63-93c34ed99031
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pr84n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 etcd-ha-058855-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-xdtp4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m20s
	  kube-system                 kube-apiserver-ha-058855-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-ha-058855-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-nz2rv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-scheduler-ha-058855-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-vip-ha-058855-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m20s (x8 over 6m20s)  kubelet          Node ha-058855-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s (x8 over 6m20s)  kubelet          Node ha-058855-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s (x7 over 6m20s)  kubelet          Node ha-058855-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  RegisteredNode           6m2s                   node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  RegisteredNode           4m45s                  node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  NodeNotReady             2m56s                  node-controller  Node ha-058855-m02 status is now: NodeNotReady
	
	
	Name:               ha-058855-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_03_08_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:03:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:07:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:04:04 +0000   Mon, 29 Apr 2024 19:03:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:04:04 +0000   Mon, 29 Apr 2024 19:03:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:04:04 +0000   Mon, 29 Apr 2024 19:03:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:04:04 +0000   Mon, 29 Apr 2024 19:03:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    ha-058855-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b6bc3a75b3f42f3aa365abccb76fd49
	  System UUID:                5b6bc3a7-5b3f-42f3-aa36-5abccb76fd49
	  Boot ID:                    012bcf6a-21fa-44f5-99a3-07d973e32c6e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xll26                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 etcd-ha-058855-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m2s
	  kube-system                 kindnet-m4fgv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m4s
	  kube-system                 kube-apiserver-ha-058855-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-controller-manager-ha-058855-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-proxy-29svc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-ha-058855-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-vip-ha-058855-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m4s (x8 over 5m4s)  kubelet          Node ha-058855-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s (x8 over 5m4s)  kubelet          Node ha-058855-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s (x7 over 5m4s)  kubelet          Node ha-058855-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-058855-m03 event: Registered Node ha-058855-m03 in Controller
	  Normal  RegisteredNode           5m1s                 node-controller  Node ha-058855-m03 event: Registered Node ha-058855-m03 in Controller
	  Normal  RegisteredNode           4m45s                node-controller  Node ha-058855-m03 event: Registered Node ha-058855-m03 in Controller
	
	
	Name:               ha-058855-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_04_09_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:04:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:08:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:04:38 +0000   Mon, 29 Apr 2024 19:04:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:04:38 +0000   Mon, 29 Apr 2024 19:04:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:04:38 +0000   Mon, 29 Apr 2024 19:04:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:04:38 +0000   Mon, 29 Apr 2024 19:04:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    ha-058855-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbc9ec7037144061a802010c8eaa7400
	  System UUID:                fbc9ec70-3714-4061-a802-010c8eaa7400
	  Boot ID:                    78cd3cac-98fc-427e-a5a6-f22c652ad17c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8mzbn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m59s
	  kube-system                 kube-proxy-7qjvk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                 From             Message
	  ----    ------                   ----                ----             -------
	  Normal  Starting                 3m54s               kube-proxy       
	  Normal  NodeHasSufficientMemory  3m59s (x3 over 4m)  kubelet          Node ha-058855-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s (x3 over 4m)  kubelet          Node ha-058855-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s (x3 over 4m)  kubelet          Node ha-058855-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m59s               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m57s               node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal  RegisteredNode           3m56s               node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal  RegisteredNode           3m55s               node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal  NodeReady                3m49s               kubelet          Node ha-058855-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr29 18:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053006] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043670] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.664189] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.502838] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Apr29 18:59] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.235737] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.063053] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066472] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.176661] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.148881] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.312890] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.946074] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.072175] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.019108] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +1.004098] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.172368] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[  +0.079206] kauditd_printk_skb: 30 callbacks suppressed
	[ +15.239291] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.268922] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067] <==
	{"level":"warn","ts":"2024-04-29T19:08:07.460885Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.46624Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.472954Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.493136Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.495159Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.504109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.51288Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.518552Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.523068Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.528047Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.546008Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.569399Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.57546Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.583728Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.58769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.603681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.61083Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.613383Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.621193Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.625567Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.628599Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.634549Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.641365Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.64855Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-29T19:08:07.709886Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3baf479dc31b93a9","from":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:08:07 up 9 min,  0 users,  load average: 0.32, 0.31, 0.16
	Linux ha-058855 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [38ea9216c1d7c2ce6fc652bc1f2020e90ddd86266e6494480d19d53d424bfc01] <==
	I0429 19:07:37.276151       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	I0429 19:07:47.295342       1 main.go:223] Handling node with IPs: map[192.168.39.52:{}]
	I0429 19:07:47.295445       1 main.go:227] handling current node
	I0429 19:07:47.295473       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 19:07:47.295495       1 main.go:250] Node ha-058855-m02 has CIDR [10.244.1.0/24] 
	I0429 19:07:47.295641       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0429 19:07:47.295666       1 main.go:250] Node ha-058855-m03 has CIDR [10.244.2.0/24] 
	I0429 19:07:47.295734       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I0429 19:07:47.295834       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	I0429 19:07:57.305263       1 main.go:223] Handling node with IPs: map[192.168.39.52:{}]
	I0429 19:07:57.305425       1 main.go:227] handling current node
	I0429 19:07:57.305456       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 19:07:57.305554       1 main.go:250] Node ha-058855-m02 has CIDR [10.244.1.0/24] 
	I0429 19:07:57.305987       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0429 19:07:57.306105       1 main.go:250] Node ha-058855-m03 has CIDR [10.244.2.0/24] 
	I0429 19:07:57.306203       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I0429 19:07:57.306225       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	I0429 19:08:07.327149       1 main.go:223] Handling node with IPs: map[192.168.39.52:{}]
	I0429 19:08:07.327175       1 main.go:227] handling current node
	I0429 19:08:07.327197       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 19:08:07.327201       1 main.go:250] Node ha-058855-m02 has CIDR [10.244.1.0/24] 
	I0429 19:08:07.327377       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0429 19:08:07.327386       1 main.go:250] Node ha-058855-m03 has CIDR [10.244.2.0/24] 
	I0429 19:08:07.327432       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I0429 19:08:07.327437       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d9139aba22c80eaaf47d55790db8284fc4c3d959ba23904a36880d4d936f4622] <==
	I0429 18:59:28.390043       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 18:59:28.407855       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.52]
	I0429 18:59:28.408978       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 18:59:28.410947       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 18:59:28.417872       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 18:59:29.459355       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 18:59:29.479068       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 18:59:29.655589       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 18:59:42.127669       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 18:59:42.419931       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 19:03:35.585563       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38490: use of closed network connection
	E0429 19:03:35.807350       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38508: use of closed network connection
	E0429 19:03:36.039102       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38524: use of closed network connection
	E0429 19:03:36.271511       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38544: use of closed network connection
	E0429 19:03:36.492521       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38574: use of closed network connection
	E0429 19:03:36.713236       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38592: use of closed network connection
	E0429 19:03:36.917523       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38602: use of closed network connection
	E0429 19:03:37.139649       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38632: use of closed network connection
	E0429 19:03:37.355222       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38650: use of closed network connection
	E0429 19:03:37.703754       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38686: use of closed network connection
	E0429 19:03:37.912743       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38696: use of closed network connection
	E0429 19:03:38.127598       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38728: use of closed network connection
	E0429 19:03:38.327424       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38744: use of closed network connection
	E0429 19:03:38.549472       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38772: use of closed network connection
	E0429 19:03:38.764153       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38782: use of closed network connection
	
	
	==> kube-controller-manager [d9513857b60ae4b75efae6de6be9d83d589f9d511ba539d01bc7e371a1a0e090] <==
	I0429 19:03:29.663258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="228.25256ms"
	E0429 19:03:29.663322       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0429 19:03:29.760748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.384701ms"
	I0429 19:03:29.760911       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.248µs"
	I0429 19:03:31.024689       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.2µs"
	I0429 19:03:31.038397       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="127.478µs"
	I0429 19:03:31.055444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.764µs"
	I0429 19:03:31.074101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="171.866µs"
	I0429 19:03:31.079683       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.981µs"
	I0429 19:03:31.096642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.776µs"
	I0429 19:03:33.638022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.907954ms"
	I0429 19:03:33.638229       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.372µs"
	I0429 19:03:34.830238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.188601ms"
	I0429 19:03:34.830386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.355µs"
	I0429 19:03:35.050992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.136317ms"
	I0429 19:03:35.051123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="64.617µs"
	E0429 19:04:07.911968       1 certificate_controller.go:146] Sync csr-22bt5 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-22bt5": the object has been modified; please apply your changes to the latest version and try again
	E0429 19:04:08.174118       1 certificate_controller.go:146] Sync csr-22bt5 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-22bt5": the object has been modified; please apply your changes to the latest version and try again
	I0429 19:04:08.229381       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-058855-m04\" does not exist"
	I0429 19:04:08.315881       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-058855-m04" podCIDRs=["10.244.3.0/24"]
	I0429 19:04:11.763753       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-058855-m04"
	I0429 19:04:18.919387       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-058855-m04"
	I0429 19:05:11.789436       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-058855-m04"
	I0429 19:05:11.967898       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.348112ms"
	I0429 19:05:11.968148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.897µs"
	
	
	==> kube-proxy [2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5] <==
	I0429 18:59:44.874421       1 server_linux.go:69] "Using iptables proxy"
	I0429 18:59:44.884463       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.52"]
	I0429 18:59:44.940495       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 18:59:44.940581       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 18:59:44.940611       1 server_linux.go:165] "Using iptables Proxier"
	I0429 18:59:44.947719       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 18:59:44.948063       1 server.go:872] "Version info" version="v1.30.0"
	I0429 18:59:44.948102       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 18:59:44.950174       1 config.go:192] "Starting service config controller"
	I0429 18:59:44.950198       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 18:59:44.950218       1 config.go:101] "Starting endpoint slice config controller"
	I0429 18:59:44.950221       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 18:59:44.950870       1 config.go:319] "Starting node config controller"
	I0429 18:59:44.950879       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 18:59:45.050536       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 18:59:45.050601       1 shared_informer.go:320] Caches are synced for service config
	I0429 18:59:45.050926       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad] <==
	W0429 18:59:27.678007       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 18:59:27.678057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 18:59:27.708155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 18:59:27.708516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 18:59:27.769910       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 18:59:27.770033       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 18:59:27.789498       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 18:59:27.789723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 18:59:27.814415       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 18:59:27.815351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 18:59:27.847043       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 18:59:27.847480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 18:59:29.764635       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 19:03:03.809276       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-29svc\": pod kube-proxy-29svc is already assigned to node \"ha-058855-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-29svc" node="ha-058855-m03"
	E0429 19:03:03.809567       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1c889e3e-7390-4e06-8bf3-424117496b4b(kube-system/kube-proxy-29svc) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-29svc"
	E0429 19:03:03.809611       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-29svc\": pod kube-proxy-29svc is already assigned to node \"ha-058855-m03\"" pod="kube-system/kube-proxy-29svc"
	I0429 19:03:03.809678       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-29svc" node="ha-058855-m03"
	E0429 19:03:29.257363       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pr84n\": pod busybox-fc5497c4f-pr84n is already assigned to node \"ha-058855-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pr84n" node="ha-058855-m03"
	E0429 19:03:29.257496       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pr84n\": pod busybox-fc5497c4f-pr84n is already assigned to node \"ha-058855-m02\"" pod="default/busybox-fc5497c4f-pr84n"
	E0429 19:04:08.343596       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8mzbn\": pod kindnet-8mzbn is already assigned to node \"ha-058855-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-8mzbn" node="ha-058855-m04"
	E0429 19:04:08.343733       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8mzbn\": pod kindnet-8mzbn is already assigned to node \"ha-058855-m04\"" pod="kube-system/kindnet-8mzbn"
	E0429 19:04:08.353249       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-7qjvk\": pod kube-proxy-7qjvk is already assigned to node \"ha-058855-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-7qjvk" node="ha-058855-m04"
	E0429 19:04:08.353339       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ff88d6a4-0fb7-4aa1-afb1-808659755020(kube-system/kube-proxy-7qjvk) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-7qjvk"
	E0429 19:04:08.353361       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-7qjvk\": pod kube-proxy-7qjvk is already assigned to node \"ha-058855-m04\"" pod="kube-system/kube-proxy-7qjvk"
	I0429 19:04:08.353381       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-7qjvk" node="ha-058855-m04"
	
	
	==> kubelet <==
	Apr 29 19:03:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:03:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:03:30 ha-058855 kubelet[1376]: E0429 19:03:30.559495    1376 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 19:03:30 ha-058855 kubelet[1376]: E0429 19:03:30.559554    1376 projected.go:200] Error preparing data for projected volume kube-api-access-25pmm for pod default/busybox-fc5497c4f-nst7c: failed to sync configmap cache: timed out waiting for the condition
	Apr 29 19:03:30 ha-058855 kubelet[1376]: E0429 19:03:30.559696    1376 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e810c83c-cdd7-4072-b8e8-319fd5aa4daa-kube-api-access-25pmm podName:e810c83c-cdd7-4072-b8e8-319fd5aa4daa nodeName:}" failed. No retries permitted until 2024-04-29 19:03:31.059646288 +0000 UTC m=+241.631094052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-25pmm" (UniqueName: "kubernetes.io/projected/e810c83c-cdd7-4072-b8e8-319fd5aa4daa-kube-api-access-25pmm") pod "busybox-fc5497c4f-nst7c" (UID: "e810c83c-cdd7-4072-b8e8-319fd5aa4daa") : failed to sync configmap cache: timed out waiting for the condition
	Apr 29 19:04:29 ha-058855 kubelet[1376]: E0429 19:04:29.601992    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:04:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:04:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:04:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:04:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:05:29 ha-058855 kubelet[1376]: E0429 19:05:29.604695    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:05:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:05:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:05:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:05:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:06:29 ha-058855 kubelet[1376]: E0429 19:06:29.605485    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:06:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:06:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:06:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:06:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:07:29 ha-058855 kubelet[1376]: E0429 19:07:29.599613    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:07:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:07:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:07:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:07:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-058855 -n ha-058855
helpers_test.go:261: (dbg) Run:  kubectl --context ha-058855 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (62.46s)
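The describe output collected above ends with each node's Ready condition, which is what a restart test like this ultimately depends on. As a hedged illustration only (not the assertion code in ha_test.go), the following client-go sketch lists the cluster's nodes and reports whether each one carries Ready=True; the -kubeconfig flag is a placeholder for whatever config points at the cluster.

// readycheck.go: hedged sketch, not the test's code; lists nodes and reports Ready status.
package main

import (
	"context"
	"flag"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "", "path to a kubeconfig for the cluster (placeholder)")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatalf("building rest config: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("creating clientset: %v", err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("listing nodes: %v", err)
	}
	for _, node := range nodes.Items {
		ready := false
		for _, cond := range node.Status.Conditions {
			// The "Ready True ... KubeletReady" rows in the describe output above are this condition.
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", node.Name, ready)
	}
}

Against a healthy ha-058855 cluster this would print one ready=true line per node, mirroring the Ready rows in the describe output above.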

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (408.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-058855 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-058855 -v=7 --alsologtostderr
E0429 19:08:16.597835   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:09:00.893847   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-058855 -v=7 --alsologtostderr: exit status 82 (2m2.733669281s)
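The stderr trace below shows what the stop path does per node: it rsyncs /etc/cni and /etc/kubernetes into /var/lib/minikube/backup over SSH, asks the driver to stop the VM, and then polls the machine state roughly once per second for up to 120 attempts. A node that never reaches the stopped state within those attempts (ha-058855-m03 here) is what pushes the command past the two-minute mark before it exits non-zero. A minimal Go sketch of that polling pattern follows; Stopper, fakeVM, and waitForStop are hypothetical stand-ins for illustration, not minikube's libmachine types.

// stopwait.go: hedged sketch of the poll-until-stopped pattern visible in the trace below.
package main

import (
	"errors"
	"fmt"
	"time"
)

// Stopper is a hypothetical stand-in for the driver calls (.Stop / .GetState) in the log.
type Stopper interface {
	Stop() error
	Stopped() (bool, error)
}

// waitForStop asks the driver to stop the machine, then polls once per second for up to
// maxAttempts, matching the "Waiting for machine to stop N/120" lines in the stderr capture.
func waitForStop(name string, d Stopper, maxAttempts int) error {
	if err := d.Stop(); err != nil {
		return fmt.Errorf("stopping %q: %w", name, err)
	}
	for i := 0; i < maxAttempts; i++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		stopped, err := d.Stopped()
		if err != nil {
			return fmt.Errorf("checking state of %q: %w", name, err)
		}
		if stopped {
			return nil
		}
		time.Sleep(time.Second)
	}
	return errors.New(name + " did not stop within the allotted attempts")
}

// fakeVM pretends to stop after a fixed number of polls so the sketch runs standalone.
type fakeVM struct{ pollsLeft int }

func (f *fakeVM) Stop() error { return nil }
func (f *fakeVM) Stopped() (bool, error) {
	f.pollsLeft--
	return f.pollsLeft <= 0, nil
}

func main() {
	if err := waitForStop("ha-058855-m04", &fakeVM{pollsLeft: 3}, 120); err != nil {
		fmt.Println("stop failed:", err)
	}
}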

                                                
                                                
-- stdout --
	* Stopping node "ha-058855-m04"  ...
	* Stopping node "ha-058855-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:08:09.223937   35968 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:08:09.224068   35968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:08:09.224080   35968 out.go:304] Setting ErrFile to fd 2...
	I0429 19:08:09.224091   35968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:08:09.224334   35968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:08:09.224564   35968 out.go:298] Setting JSON to false
	I0429 19:08:09.224642   35968 mustload.go:65] Loading cluster: ha-058855
	I0429 19:08:09.225023   35968 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:08:09.225120   35968 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 19:08:09.225293   35968 mustload.go:65] Loading cluster: ha-058855
	I0429 19:08:09.225428   35968 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:08:09.225451   35968 stop.go:39] StopHost: ha-058855-m04
	I0429 19:08:09.225796   35968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:08:09.225833   35968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:08:09.242219   35968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41225
	I0429 19:08:09.242676   35968 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:08:09.243273   35968 main.go:141] libmachine: Using API Version  1
	I0429 19:08:09.243305   35968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:08:09.243634   35968 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:08:09.246292   35968 out.go:177] * Stopping node "ha-058855-m04"  ...
	I0429 19:08:09.247891   35968 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 19:08:09.247926   35968 main.go:141] libmachine: (ha-058855-m04) Calling .DriverName
	I0429 19:08:09.248176   35968 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 19:08:09.248213   35968 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHHostname
	I0429 19:08:09.251054   35968 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:08:09.251443   35968 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:03:55 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:08:09.251483   35968 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:08:09.251573   35968 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHPort
	I0429 19:08:09.251741   35968 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHKeyPath
	I0429 19:08:09.251910   35968 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHUsername
	I0429 19:08:09.252057   35968 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m04/id_rsa Username:docker}
	I0429 19:08:09.345149   35968 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 19:08:09.401183   35968 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 19:08:09.456747   35968 main.go:141] libmachine: Stopping "ha-058855-m04"...
	I0429 19:08:09.456786   35968 main.go:141] libmachine: (ha-058855-m04) Calling .GetState
	I0429 19:08:09.458449   35968 main.go:141] libmachine: (ha-058855-m04) Calling .Stop
	I0429 19:08:09.462254   35968 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 0/120
	I0429 19:08:10.464416   35968 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 1/120
	I0429 19:08:11.466134   35968 main.go:141] libmachine: (ha-058855-m04) Calling .GetState
	I0429 19:08:11.467234   35968 main.go:141] libmachine: Machine "ha-058855-m04" was stopped.
	I0429 19:08:11.467256   35968 stop.go:75] duration metric: took 2.219366094s to stop
	I0429 19:08:11.467277   35968 stop.go:39] StopHost: ha-058855-m03
	I0429 19:08:11.467549   35968 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:08:11.467586   35968 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:08:11.482219   35968 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37689
	I0429 19:08:11.482678   35968 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:08:11.483246   35968 main.go:141] libmachine: Using API Version  1
	I0429 19:08:11.483275   35968 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:08:11.483621   35968 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:08:11.486050   35968 out.go:177] * Stopping node "ha-058855-m03"  ...
	I0429 19:08:11.487404   35968 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 19:08:11.487427   35968 main.go:141] libmachine: (ha-058855-m03) Calling .DriverName
	I0429 19:08:11.487635   35968 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 19:08:11.487654   35968 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHHostname
	I0429 19:08:11.490687   35968 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:08:11.491175   35968 main.go:141] libmachine: (ha-058855-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:23:56", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:02:24 +0000 UTC Type:0 Mac:52:54:00:78:23:56 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-058855-m03 Clientid:01:52:54:00:78:23:56}
	I0429 19:08:11.491198   35968 main.go:141] libmachine: (ha-058855-m03) DBG | domain ha-058855-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:78:23:56 in network mk-ha-058855
	I0429 19:08:11.491370   35968 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHPort
	I0429 19:08:11.491535   35968 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHKeyPath
	I0429 19:08:11.491682   35968 main.go:141] libmachine: (ha-058855-m03) Calling .GetSSHUsername
	I0429 19:08:11.491821   35968 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m03/id_rsa Username:docker}
	I0429 19:08:11.583472   35968 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 19:08:11.639553   35968 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 19:08:11.700931   35968 main.go:141] libmachine: Stopping "ha-058855-m03"...
	I0429 19:08:11.700959   35968 main.go:141] libmachine: (ha-058855-m03) Calling .GetState
	I0429 19:08:11.702383   35968 main.go:141] libmachine: (ha-058855-m03) Calling .Stop
	I0429 19:08:11.706125   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 0/120
	I0429 19:08:12.707317   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 1/120
	I0429 19:08:13.708846   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 2/120
	I0429 19:08:14.710214   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 3/120
	I0429 19:08:15.711730   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 4/120
	I0429 19:08:16.713196   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 5/120
	I0429 19:08:17.714301   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 6/120
	I0429 19:08:18.715971   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 7/120
	I0429 19:08:19.717517   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 8/120
	I0429 19:08:20.719262   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 9/120
	I0429 19:08:21.721202   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 10/120
	I0429 19:08:22.723091   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 11/120
	I0429 19:08:23.724848   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 12/120
	I0429 19:08:24.726378   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 13/120
	I0429 19:08:25.727925   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 14/120
	I0429 19:08:26.729850   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 15/120
	I0429 19:08:27.731611   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 16/120
	I0429 19:08:28.733091   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 17/120
	I0429 19:08:29.735180   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 18/120
	I0429 19:08:30.736527   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 19/120
	I0429 19:08:31.738225   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 20/120
	I0429 19:08:32.740618   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 21/120
	I0429 19:08:33.742330   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 22/120
	I0429 19:08:34.743758   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 23/120
	I0429 19:08:35.745382   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 24/120
	I0429 19:08:36.747241   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 25/120
	I0429 19:08:37.749851   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 26/120
	I0429 19:08:38.751224   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 27/120
	I0429 19:08:39.752736   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 28/120
	I0429 19:08:40.754006   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 29/120
	I0429 19:08:41.755823   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 30/120
	I0429 19:08:42.757295   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 31/120
	I0429 19:08:43.758729   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 32/120
	I0429 19:08:44.760125   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 33/120
	I0429 19:08:45.761415   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 34/120
	I0429 19:08:46.762643   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 35/120
	I0429 19:08:47.763913   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 36/120
	I0429 19:08:48.765092   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 37/120
	I0429 19:08:49.766446   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 38/120
	I0429 19:08:50.768145   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 39/120
	I0429 19:08:51.769948   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 40/120
	I0429 19:08:52.771386   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 41/120
	I0429 19:08:53.772700   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 42/120
	I0429 19:08:54.774016   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 43/120
	I0429 19:08:55.775293   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 44/120
	I0429 19:08:56.777054   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 45/120
	I0429 19:08:57.778472   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 46/120
	I0429 19:08:58.780482   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 47/120
	I0429 19:08:59.781809   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 48/120
	I0429 19:09:00.783366   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 49/120
	I0429 19:09:01.784986   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 50/120
	I0429 19:09:02.786351   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 51/120
	I0429 19:09:03.787689   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 52/120
	I0429 19:09:04.789502   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 53/120
	I0429 19:09:05.790807   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 54/120
	I0429 19:09:06.792672   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 55/120
	I0429 19:09:07.793911   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 56/120
	I0429 19:09:08.795249   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 57/120
	I0429 19:09:09.796913   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 58/120
	I0429 19:09:10.799212   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 59/120
	I0429 19:09:11.800933   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 60/120
	I0429 19:09:12.802287   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 61/120
	I0429 19:09:13.803597   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 62/120
	I0429 19:09:14.804956   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 63/120
	I0429 19:09:15.806262   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 64/120
	I0429 19:09:16.807963   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 65/120
	I0429 19:09:17.809516   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 66/120
	I0429 19:09:18.810957   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 67/120
	I0429 19:09:19.812675   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 68/120
	I0429 19:09:20.813975   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 69/120
	I0429 19:09:21.816163   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 70/120
	I0429 19:09:22.817437   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 71/120
	I0429 19:09:23.818734   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 72/120
	I0429 19:09:24.819903   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 73/120
	I0429 19:09:25.821110   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 74/120
	I0429 19:09:26.822773   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 75/120
	I0429 19:09:27.824128   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 76/120
	I0429 19:09:28.825283   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 77/120
	I0429 19:09:29.826653   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 78/120
	I0429 19:09:30.828012   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 79/120
	I0429 19:09:31.829416   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 80/120
	I0429 19:09:32.830636   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 81/120
	I0429 19:09:33.832987   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 82/120
	I0429 19:09:34.834343   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 83/120
	I0429 19:09:35.835910   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 84/120
	I0429 19:09:36.837546   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 85/120
	I0429 19:09:37.839047   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 86/120
	I0429 19:09:38.840677   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 87/120
	I0429 19:09:39.841893   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 88/120
	I0429 19:09:40.843214   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 89/120
	I0429 19:09:41.845025   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 90/120
	I0429 19:09:42.846325   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 91/120
	I0429 19:09:43.847696   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 92/120
	I0429 19:09:44.848982   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 93/120
	I0429 19:09:45.850335   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 94/120
	I0429 19:09:46.851994   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 95/120
	I0429 19:09:47.853236   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 96/120
	I0429 19:09:48.854460   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 97/120
	I0429 19:09:49.855715   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 98/120
	I0429 19:09:50.856797   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 99/120
	I0429 19:09:51.859006   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 100/120
	I0429 19:09:52.860583   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 101/120
	I0429 19:09:53.862031   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 102/120
	I0429 19:09:54.863591   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 103/120
	I0429 19:09:55.865059   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 104/120
	I0429 19:09:56.866732   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 105/120
	I0429 19:09:57.868209   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 106/120
	I0429 19:09:58.869844   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 107/120
	I0429 19:09:59.871284   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 108/120
	I0429 19:10:00.872708   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 109/120
	I0429 19:10:01.874406   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 110/120
	I0429 19:10:02.876001   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 111/120
	I0429 19:10:03.877342   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 112/120
	I0429 19:10:04.878835   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 113/120
	I0429 19:10:05.880236   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 114/120
	I0429 19:10:06.881816   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 115/120
	I0429 19:10:07.883364   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 116/120
	I0429 19:10:08.884867   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 117/120
	I0429 19:10:09.886125   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 118/120
	I0429 19:10:10.887450   35968 main.go:141] libmachine: (ha-058855-m03) Waiting for machine to stop 119/120
	I0429 19:10:11.888379   35968 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0429 19:10:11.888439   35968 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0429 19:10:11.890901   35968 out.go:177] 
	W0429 19:10:11.892602   35968 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0429 19:10:11.892624   35968 out.go:239] * 
	W0429 19:10:11.894793   35968 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 19:10:11.896220   35968 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-058855 -v=7 --alsologtostderr" : exit status 82
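The stderr block above shows minikube polling the m03 VM roughly once per second, "Waiting for machine to stop" up through attempt 119/120, before giving up with GUEST_STOP_TIMEOUT and exit status 82. A minimal sketch of that pattern, assuming a poll-based stop with a fixed attempt budget; the type and helper names below are hypothetical and are not minikube's actual stop.go:

// stop_wait_sketch.go — illustrative only.
package main

import (
	"errors"
	"fmt"
	"time"
)

type vm struct{ state string }

// Stop asks the guest to power off; in the failing run the request never took effect.
func (v *vm) Stop() error { return nil }

// State reports a libmachine-style state string, e.g. "Running" or "Stopped".
func (v *vm) State() string { return v.state }

// stopWithTimeout polls the machine once per interval for at most attempts tries,
// mirroring the "Waiting for machine to stop N/120" lines in the log above.
func stopWithTimeout(v *vm, attempts int, interval time.Duration) error {
	if err := v.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if v.State() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	machine := &vm{state: "Running"} // a guest that ignores the stop request, as in this run
	// The real run used 120 attempts at about 1s each; a short interval keeps this demo quick.
	if err := stopWithTimeout(machine, 120, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err) // surfaced above as GUEST_STOP_TIMEOUT / exit status 82
	}
}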
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-058855 --wait=true -v=7 --alsologtostderr
E0429 19:10:23.946682   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 19:12:48.914853   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:14:00.893470   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-058855 --wait=true -v=7 --alsologtostderr: (4m42.878718537s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-058855
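For context, the commands this test drives around these lines (recorded in the harness output here and in the Audit table below) are the profile's stop, a full start with --wait=true, and a node list check. A minimal sketch of that sequence, reusing the binary path and profile name from the log; the run helper is hypothetical and is not the test's own code:

// restart_sequence_sketch.go — illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary built by this job and echoes its output.
func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ out/minikube-linux-amd64 %v\n%s", args, out)
	return err
}

func main() {
	// Stop is expected to succeed; in this run it returned exit status 82 (GUEST_STOP_TIMEOUT).
	if err := run("stop", "-p", "ha-058855", "-v=7", "--alsologtostderr"); err != nil {
		fmt.Println("stop failed:", err)
	}
	// Restart and wait for all components; this run took about 4m43s.
	if err := run("start", "-p", "ha-058855", "--wait=true", "-v=7", "--alsologtostderr"); err != nil {
		fmt.Println("start failed:", err)
	}
	// Verify the four nodes are still listed after the restart.
	if err := run("node", "list", "-p", "ha-058855"); err != nil {
		fmt.Println("node list failed:", err)
	}
}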
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-058855 -n ha-058855
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-058855 logs -n 25: (2.143070699s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m02:/home/docker/cp-test_ha-058855-m03_ha-058855-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m02 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m03_ha-058855-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04:/home/docker/cp-test_ha-058855-m03_ha-058855-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m04 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m03_ha-058855-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-058855 cp testdata/cp-test.txt                                                | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1826286980/001/cp-test_ha-058855-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855:/home/docker/cp-test_ha-058855-m04_ha-058855.txt                       |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855 sudo cat                                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m04_ha-058855.txt                                 |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m02:/home/docker/cp-test_ha-058855-m04_ha-058855-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m02 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m04_ha-058855-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03:/home/docker/cp-test_ha-058855-m04_ha-058855-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m03 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m04_ha-058855-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-058855 node stop m02 -v=7                                                     | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-058855 node start m02 -v=7                                                    | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-058855 -v=7                                                           | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-058855 -v=7                                                                | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-058855 --wait=true -v=7                                                    | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:10 UTC | 29 Apr 24 19:14 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-058855                                                                | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:14 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 19:10:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 19:10:11.959403   37131 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:10:11.959544   37131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:10:11.959558   37131 out.go:304] Setting ErrFile to fd 2...
	I0429 19:10:11.959580   37131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:10:11.959792   37131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:10:11.960337   37131 out.go:298] Setting JSON to false
	I0429 19:10:11.961341   37131 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3110,"bootTime":1714414702,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 19:10:11.961404   37131 start.go:139] virtualization: kvm guest
	I0429 19:10:11.963766   37131 out.go:177] * [ha-058855] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 19:10:11.965451   37131 notify.go:220] Checking for updates...
	I0429 19:10:11.965462   37131 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:10:11.967025   37131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:10:11.968509   37131 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:10:11.969814   37131 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:10:11.971109   37131 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 19:10:11.972470   37131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:10:11.974405   37131 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:10:11.974553   37131 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:10:11.975119   37131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:10:11.975173   37131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:10:11.989975   37131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44421
	I0429 19:10:11.990440   37131 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:10:11.991047   37131 main.go:141] libmachine: Using API Version  1
	I0429 19:10:11.991075   37131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:10:11.991488   37131 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:10:11.991678   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:10:12.029107   37131 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 19:10:12.030413   37131 start.go:297] selected driver: kvm2
	I0429 19:10:12.030425   37131 start.go:901] validating driver "kvm2" against &{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:10:12.030551   37131 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:10:12.030856   37131 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:10:12.030923   37131 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 19:10:12.046138   37131 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 19:10:12.047024   37131 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:10:12.047108   37131 cni.go:84] Creating CNI manager for ""
	I0429 19:10:12.047127   37131 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0429 19:10:12.047207   37131 start.go:340] cluster config:
	{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:10:12.047415   37131 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:10:12.049397   37131 out.go:177] * Starting "ha-058855" primary control-plane node in "ha-058855" cluster
	I0429 19:10:12.050731   37131 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:10:12.050776   37131 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 19:10:12.050790   37131 cache.go:56] Caching tarball of preloaded images
	I0429 19:10:12.050875   37131 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 19:10:12.050885   37131 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 19:10:12.051032   37131 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 19:10:12.051303   37131 start.go:360] acquireMachinesLock for ha-058855: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:10:12.051354   37131 start.go:364] duration metric: took 26.841µs to acquireMachinesLock for "ha-058855"
	I0429 19:10:12.051376   37131 start.go:96] Skipping create...Using existing machine configuration
	I0429 19:10:12.051384   37131 fix.go:54] fixHost starting: 
	I0429 19:10:12.051633   37131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:10:12.051663   37131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:10:12.066341   37131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I0429 19:10:12.066799   37131 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:10:12.067280   37131 main.go:141] libmachine: Using API Version  1
	I0429 19:10:12.067304   37131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:10:12.067723   37131 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:10:12.068017   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:10:12.068255   37131 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:10:12.070340   37131 fix.go:112] recreateIfNeeded on ha-058855: state=Running err=<nil>
	W0429 19:10:12.070376   37131 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 19:10:12.072347   37131 out.go:177] * Updating the running kvm2 "ha-058855" VM ...
	I0429 19:10:12.073574   37131 machine.go:94] provisionDockerMachine start ...
	I0429 19:10:12.073597   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:10:12.073839   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:10:12.076533   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.076984   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.077026   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.077153   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:10:12.077324   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.077485   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.077603   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:10:12.077808   37131 main.go:141] libmachine: Using SSH client type: native
	I0429 19:10:12.078136   37131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 19:10:12.078155   37131 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 19:10:12.196094   37131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-058855
	
	I0429 19:10:12.196124   37131 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 19:10:12.196362   37131 buildroot.go:166] provisioning hostname "ha-058855"
	I0429 19:10:12.196383   37131 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 19:10:12.196579   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:10:12.199004   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.199382   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.199406   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.199587   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:10:12.199770   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.199933   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.200069   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:10:12.200655   37131 main.go:141] libmachine: Using SSH client type: native
	I0429 19:10:12.200962   37131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 19:10:12.201016   37131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-058855 && echo "ha-058855" | sudo tee /etc/hostname
	I0429 19:10:12.338170   37131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-058855
	
	I0429 19:10:12.338200   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:10:12.341036   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.341529   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.341554   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.341766   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:10:12.341962   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.342192   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.342366   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:10:12.342523   37131 main.go:141] libmachine: Using SSH client type: native
	I0429 19:10:12.342679   37131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 19:10:12.342695   37131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-058855' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-058855/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-058855' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:10:12.455859   37131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:10:12.455894   37131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:10:12.455944   37131 buildroot.go:174] setting up certificates
	I0429 19:10:12.455962   37131 provision.go:84] configureAuth start
	I0429 19:10:12.455980   37131 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 19:10:12.456321   37131 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:10:12.459120   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.459569   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.459599   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.459761   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:10:12.462211   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.462531   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.462583   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.462717   37131 provision.go:143] copyHostCerts
	I0429 19:10:12.462743   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:10:12.462776   37131 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:10:12.462785   37131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:10:12.462846   37131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:10:12.462931   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:10:12.462948   37131 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:10:12.462954   37131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:10:12.462977   37131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:10:12.463030   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:10:12.463045   37131 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:10:12.463049   37131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:10:12.463069   37131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:10:12.463158   37131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.ha-058855 san=[127.0.0.1 192.168.39.52 ha-058855 localhost minikube]
	I0429 19:10:12.575702   37131 provision.go:177] copyRemoteCerts
	I0429 19:10:12.575761   37131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:10:12.575783   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:10:12.578598   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.578963   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.578992   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.579190   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:10:12.579379   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.579512   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:10:12.579665   37131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:10:12.671290   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 19:10:12.671353   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:10:12.703502   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 19:10:12.703571   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 19:10:12.733519   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 19:10:12.733590   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 19:10:12.762793   37131 provision.go:87] duration metric: took 306.815027ms to configureAuth
	I0429 19:10:12.762824   37131 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:10:12.763079   37131 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:10:12.763161   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:10:12.766137   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.766553   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.766574   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.766844   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:10:12.767029   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.767189   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.767405   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:10:12.767561   37131 main.go:141] libmachine: Using SSH client type: native
	I0429 19:10:12.767751   37131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 19:10:12.767781   37131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:11:43.782036   37131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 19:11:43.782095   37131 machine.go:97] duration metric: took 1m31.708503981s to provisionDockerMachine
	I0429 19:11:43.782110   37131 start.go:293] postStartSetup for "ha-058855" (driver="kvm2")
	I0429 19:11:43.782123   37131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:11:43.782149   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:11:43.782521   37131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:11:43.782551   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:11:43.785555   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:43.786050   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:43.786099   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:43.786251   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:11:43.786480   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:11:43.786655   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:11:43.786815   37131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:11:43.875377   37131 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:11:43.880400   37131 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:11:43.880430   37131 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:11:43.880510   37131 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:11:43.880596   37131 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:11:43.880611   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /etc/ssl/certs/151242.pem
	I0429 19:11:43.880692   37131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:11:43.892419   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:11:43.919920   37131 start.go:296] duration metric: took 137.794993ms for postStartSetup
	I0429 19:11:43.919974   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:11:43.920302   37131 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0429 19:11:43.920327   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:11:43.922870   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:43.923308   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:43.923334   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:43.923470   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:11:43.923659   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:11:43.923794   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:11:43.923910   37131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	W0429 19:11:44.010726   37131 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0429 19:11:44.010757   37131 fix.go:56] duration metric: took 1m31.959371993s for fixHost
	I0429 19:11:44.010779   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:11:44.013493   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.013802   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:44.013825   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.014016   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:11:44.014232   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:11:44.014401   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:11:44.014520   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:11:44.014659   37131 main.go:141] libmachine: Using SSH client type: native
	I0429 19:11:44.014838   37131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 19:11:44.014851   37131 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 19:11:44.127408   37131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714417904.090718173
	
	I0429 19:11:44.127431   37131 fix.go:216] guest clock: 1714417904.090718173
	I0429 19:11:44.127439   37131 fix.go:229] Guest: 2024-04-29 19:11:44.090718173 +0000 UTC Remote: 2024-04-29 19:11:44.010765189 +0000 UTC m=+92.104756440 (delta=79.952984ms)
	I0429 19:11:44.127489   37131 fix.go:200] guest clock delta is within tolerance: 79.952984ms
	I0429 19:11:44.127495   37131 start.go:83] releasing machines lock for "ha-058855", held for 1m32.076131381s
	I0429 19:11:44.127512   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:11:44.127783   37131 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:11:44.130490   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.130842   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:44.130869   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.130981   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:11:44.131519   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:11:44.131693   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:11:44.131751   37131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:11:44.131793   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:11:44.131886   37131 ssh_runner.go:195] Run: cat /version.json
	I0429 19:11:44.131910   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:11:44.134359   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.134658   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.134761   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:44.134813   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.134898   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:11:44.135078   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:11:44.135113   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:44.135136   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.135239   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:11:44.135294   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:11:44.135391   37131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:11:44.135452   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:11:44.135575   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:11:44.135728   37131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:11:44.215552   37131 ssh_runner.go:195] Run: systemctl --version
	I0429 19:11:44.248100   37131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:11:44.413843   37131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:11:44.423032   37131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:11:44.423107   37131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:11:44.433461   37131 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 19:11:44.433490   37131 start.go:494] detecting cgroup driver to use...
	I0429 19:11:44.433545   37131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:11:44.451549   37131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:11:44.468006   37131 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:11:44.468072   37131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:11:44.482338   37131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:11:44.496879   37131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:11:44.647500   37131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:11:44.817114   37131 docker.go:233] disabling docker service ...
	I0429 19:11:44.817199   37131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:11:44.840275   37131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:11:44.857077   37131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:11:45.017083   37131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:11:45.173348   37131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 19:11:45.190692   37131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:11:45.212518   37131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 19:11:45.212578   37131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.224857   37131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:11:45.224932   37131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.237597   37131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.250437   37131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.263192   37131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:11:45.276393   37131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.290240   37131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.302922   37131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.316757   37131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:11:45.328755   37131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:11:45.339973   37131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:11:45.495024   37131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 19:11:50.910739   37131 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.415677431s)
	I0429 19:11:50.910773   37131 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:11:50.910828   37131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:11:50.916703   37131 start.go:562] Will wait 60s for crictl version
	I0429 19:11:50.916757   37131 ssh_runner.go:195] Run: which crictl
	I0429 19:11:50.921257   37131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:11:50.974084   37131 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 19:11:50.974153   37131 ssh_runner.go:195] Run: crio --version
	I0429 19:11:51.012909   37131 ssh_runner.go:195] Run: crio --version
	I0429 19:11:51.052247   37131 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 19:11:51.053873   37131 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:11:51.056690   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:51.057028   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:51.057050   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:51.057243   37131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 19:11:51.062814   37131 kubeadm.go:877] updating cluster {Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 19:11:51.062948   37131 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:11:51.063003   37131 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:11:51.116925   37131 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 19:11:51.116948   37131 crio.go:433] Images already preloaded, skipping extraction
	I0429 19:11:51.117001   37131 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:11:51.159723   37131 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 19:11:51.159746   37131 cache_images.go:84] Images are preloaded, skipping loading
	I0429 19:11:51.159755   37131 kubeadm.go:928] updating node { 192.168.39.52 8443 v1.30.0 crio true true} ...
	I0429 19:11:51.159855   37131 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-058855 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:11:51.159920   37131 ssh_runner.go:195] Run: crio config
	I0429 19:11:51.222237   37131 cni.go:84] Creating CNI manager for ""
	I0429 19:11:51.222258   37131 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0429 19:11:51.222268   37131 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 19:11:51.222288   37131 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.52 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-058855 NodeName:ha-058855 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 19:11:51.222422   37131 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-058855"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 19:11:51.222441   37131 kube-vip.go:115] generating kube-vip config ...
	I0429 19:11:51.222480   37131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 19:11:51.235971   37131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 19:11:51.236098   37131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0429 19:11:51.236153   37131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:11:51.247404   37131 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 19:11:51.247498   37131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 19:11:51.259260   37131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 19:11:51.279278   37131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:11:51.296770   37131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0429 19:11:51.315766   37131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0429 19:11:51.335464   37131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 19:11:51.341180   37131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:11:51.505378   37131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:11:51.523230   37131 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855 for IP: 192.168.39.52
	I0429 19:11:51.523251   37131 certs.go:194] generating shared ca certs ...
	I0429 19:11:51.523265   37131 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:11:51.523431   37131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 19:11:51.523498   37131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 19:11:51.523512   37131 certs.go:256] generating profile certs ...
	I0429 19:11:51.523600   37131 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key
	I0429 19:11:51.523637   37131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.b5b24e72
	I0429 19:11:51.523658   37131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.b5b24e72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.52 192.168.39.27 192.168.39.215 192.168.39.254]
	I0429 19:11:52.043059   37131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.b5b24e72 ...
	I0429 19:11:52.043088   37131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.b5b24e72: {Name:mk2d26705800526e7e28daf478b103ebbe86ff77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:11:52.043250   37131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.b5b24e72 ...
	I0429 19:11:52.043279   37131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.b5b24e72: {Name:mk9e91764a777ba5e6b2e2f3d743a8444b123491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:11:52.043355   37131 certs.go:381] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.b5b24e72 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt
	I0429 19:11:52.043507   37131 certs.go:385] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.b5b24e72 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key
	I0429 19:11:52.043637   37131 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key
	I0429 19:11:52.043652   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:11:52.043664   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:11:52.043677   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:11:52.043689   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:11:52.043701   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:11:52.043713   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:11:52.043730   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:11:52.043751   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:11:52.043802   37131 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 19:11:52.043829   37131 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 19:11:52.043839   37131 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 19:11:52.043860   37131 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 19:11:52.043886   37131 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 19:11:52.043906   37131 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 19:11:52.043940   37131 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:11:52.043965   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem -> /usr/share/ca-certificates/15124.pem
	I0429 19:11:52.043978   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /usr/share/ca-certificates/151242.pem
	I0429 19:11:52.043990   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:11:52.044571   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:11:52.076136   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 19:11:52.104116   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:11:52.131069   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:11:52.159412   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0429 19:11:52.188172   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 19:11:52.214965   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:11:52.242255   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 19:11:52.269831   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 19:11:52.296844   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 19:11:52.324587   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:11:52.352226   37131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 19:11:52.371652   37131 ssh_runner.go:195] Run: openssl version
	I0429 19:11:52.378694   37131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 19:11:52.391448   37131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 19:11:52.397067   37131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:11:52.397115   37131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 19:11:52.403690   37131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 19:11:52.414917   37131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 19:11:52.427327   37131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 19:11:52.432800   37131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:11:52.432858   37131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 19:11:52.439978   37131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:11:52.450188   37131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:11:52.461760   37131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:11:52.467142   37131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:11:52.467212   37131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:11:52.473502   37131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:11:52.484335   37131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:11:52.489814   37131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 19:11:52.496946   37131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 19:11:52.503377   37131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 19:11:52.509977   37131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 19:11:52.516805   37131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 19:11:52.523219   37131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 19:11:52.529815   37131 kubeadm.go:391] StartCluster: {Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:11:52.529971   37131 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 19:11:52.530028   37131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 19:11:52.571928   37131 cri.go:89] found id: "456e3c46605c79ee57cb93c23059d5f03ffaa307a1bde9a358e8dbf26733090b"
	I0429 19:11:52.571955   37131 cri.go:89] found id: "dc0361e8b66dd1248ecd1214f6b9fa96a060ba135ef3bd13e16b7119c7a30299"
	I0429 19:11:52.571961   37131 cri.go:89] found id: "f89d1200b589323095b891ded44d0f39b5d9d304183f973762186b00994f3cbf"
	I0429 19:11:52.571966   37131 cri.go:89] found id: "09573684ce4866f26fe6dc7ca6f3016d7610603eb5aed63c3c620c2f9a2e95d6"
	I0429 19:11:52.571970   37131 cri.go:89] found id: "c7318d57848f144b2bb27a1ee912ec5726a3996ab5d9a75712fcd8120d1c41df"
	I0429 19:11:52.571974   37131 cri.go:89] found id: "6d85e15a41334e0f49396a7c8783334a7d5e05b649146b665d0437111bf89ade"
	I0429 19:11:52.571978   37131 cri.go:89] found id: "35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b"
	I0429 19:11:52.571982   37131 cri.go:89] found id: "db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe"
	I0429 19:11:52.571986   37131 cri.go:89] found id: "2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5"
	I0429 19:11:52.571993   37131 cri.go:89] found id: "45ced81842ab99aabac98f2ac5d6e1b110a73465d11e56c87d6166d153839862"
	I0429 19:11:52.571997   37131 cri.go:89] found id: "3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad"
	I0429 19:11:52.572001   37131 cri.go:89] found id: "d9513857b60ae4b75efae6de6be9d83d589f9d511ba539d01bc7e371a1a0e090"
	I0429 19:11:52.572008   37131 cri.go:89] found id: "d9139aba22c80eaaf47d55790db8284fc4c3d959ba23904a36880d4d936f4622"
	I0429 19:11:52.572013   37131 cri.go:89] found id: "f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067"
	I0429 19:11:52.572020   37131 cri.go:89] found id: ""
	I0429 19:11:52.572068   37131 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.677052011Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e89442e2-975e-482e-81b7-9fb70e3c333d name=/runtime.v1.RuntimeService/Version
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.678685509Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d808fddc-62aa-418a-822f-9cb2466afc8d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.679352060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714418095679313694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d808fddc-62aa-418a-822f-9cb2466afc8d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.680292694Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55d1d03f-120e-4275-b86c-225e2e45e4d1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.680386194Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55d1d03f-120e-4275-b86c-225e2e45e4d1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.680976977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e191e297281741021e5309da12023e898fb42af47a910b5296fca453cf3a59a9,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714418014575610666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d56e42bb62f0802b29ab5431bfe35a9c4ed282bef23cd07745fd552f016a0c2,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714417998584511876,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dcb7268514a41d84040496fb3f97dd604c39d860db3795b1f536f6388d6c11,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417960583418252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b59ec3dc1e29a4c89fb2d40bf1cb3db18358c929912c01f77801025c117736f,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417958577543599,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49055a12b83d77f6453880eea876f9f8827a406c542e2fae249a50e1417f0583,PodSandboxId:c5f248cdad0a4e0c612e6124cf1ec86f5f5e7e51c8195186b1dae72669e820eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417950948693189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cc8a93682bcdac3c74aabfaf7ac1a16386d5e52b357267a4354a32e4789709,PodSandboxId:19446d08654e14ba0fc1823d9b4dad71e2457cd842f2b4237041e278acb314a5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417928149728501,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0101d9bfd28f4f64a2207189ca2952df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:2ca11a172d18b7da9d7ad94a0a9eae78db44bfaec6ec0ce8cc6be0a5c4d6e791,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714417919017837307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3234c6a2a02115d1a2b3c8db09477d14fa780e263e04d16a673863bdef318b03,PodSandboxId:1981e51a60fc9bfd1a839f81ae9faf09c9556e372755305615281483a1187fc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417917587343991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa254f41
7bd8c51401396df387d06fb731904675af71223321fec1e881d2e3bc,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714417917767697912,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b73fc09f93dd22fd87a22dc40dbad619e67ea8a27
b8e20dcf601f5e0f7ddcb,PodSandboxId:48b8b3bb4968f7483eebf06032b1a8accab07811f969d5231f87a2ccf2c7127f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917914910181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080f231760b7719587b43a8121d8b9e314e646c9be91cd1843e6879b061326ac,PodSandboxId:54d8909c7a920e28849cf9c10442ef50f0faf48e265fd2fa2c1fa044f97f7e93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917809121425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02cf56519f638778caaaa8342593494ae6cecd78d3a8f6122ae98be89f810dae,PodSandboxId:720fc0053e31cfbb6f1170c0811bbea3d7a92267a445f2f9096e17724c461b24,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417917657039067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53824
70eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3bfc6bba83dd30bc001418918d12a37f07affec561132fc8a6bfd32f7fca8c,PodSandboxId:6ff12ce46f5f84dfc87db5bb207fbd9e412ab6d9f83e04aec492de99a510cd30,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417917436371922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f21f1cfa42f5dc7250d4b936ccac831fb3c1028e1832fef69bf664596a8c441,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714417917519326975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes
.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3212de69ac372cf90c1735c062daa36d336d730750901cd5fb573b42df375e,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714417917398524057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kuber
netes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714417414458938341,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kuberne
tes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187512933738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187459716216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714417184691606405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714417163290641629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714417163188867021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55d1d03f-120e-4275-b86c-225e2e45e4d1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.744406575Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a451d10-f6fc-4000-b55e-6a8d0404d99b name=/runtime.v1.RuntimeService/Version
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.744521035Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a451d10-f6fc-4000-b55e-6a8d0404d99b name=/runtime.v1.RuntimeService/Version
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.749028594Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=240fab0b-5ac1-4f3e-944a-a70207ba2544 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.749664450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714418095749627689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=240fab0b-5ac1-4f3e-944a-a70207ba2544 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.750519622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59f3c551-27da-47a7-988d-dfe72dd33787 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.750743034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59f3c551-27da-47a7-988d-dfe72dd33787 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.752145076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e191e297281741021e5309da12023e898fb42af47a910b5296fca453cf3a59a9,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714418014575610666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d56e42bb62f0802b29ab5431bfe35a9c4ed282bef23cd07745fd552f016a0c2,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714417998584511876,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dcb7268514a41d84040496fb3f97dd604c39d860db3795b1f536f6388d6c11,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417960583418252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b59ec3dc1e29a4c89fb2d40bf1cb3db18358c929912c01f77801025c117736f,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417958577543599,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49055a12b83d77f6453880eea876f9f8827a406c542e2fae249a50e1417f0583,PodSandboxId:c5f248cdad0a4e0c612e6124cf1ec86f5f5e7e51c8195186b1dae72669e820eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417950948693189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cc8a93682bcdac3c74aabfaf7ac1a16386d5e52b357267a4354a32e4789709,PodSandboxId:19446d08654e14ba0fc1823d9b4dad71e2457cd842f2b4237041e278acb314a5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417928149728501,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0101d9bfd28f4f64a2207189ca2952df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:2ca11a172d18b7da9d7ad94a0a9eae78db44bfaec6ec0ce8cc6be0a5c4d6e791,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714417919017837307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3234c6a2a02115d1a2b3c8db09477d14fa780e263e04d16a673863bdef318b03,PodSandboxId:1981e51a60fc9bfd1a839f81ae9faf09c9556e372755305615281483a1187fc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417917587343991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa254f41
7bd8c51401396df387d06fb731904675af71223321fec1e881d2e3bc,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714417917767697912,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b73fc09f93dd22fd87a22dc40dbad619e67ea8a27
b8e20dcf601f5e0f7ddcb,PodSandboxId:48b8b3bb4968f7483eebf06032b1a8accab07811f969d5231f87a2ccf2c7127f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917914910181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080f231760b7719587b43a8121d8b9e314e646c9be91cd1843e6879b061326ac,PodSandboxId:54d8909c7a920e28849cf9c10442ef50f0faf48e265fd2fa2c1fa044f97f7e93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917809121425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02cf56519f638778caaaa8342593494ae6cecd78d3a8f6122ae98be89f810dae,PodSandboxId:720fc0053e31cfbb6f1170c0811bbea3d7a92267a445f2f9096e17724c461b24,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417917657039067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53824
70eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3bfc6bba83dd30bc001418918d12a37f07affec561132fc8a6bfd32f7fca8c,PodSandboxId:6ff12ce46f5f84dfc87db5bb207fbd9e412ab6d9f83e04aec492de99a510cd30,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417917436371922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f21f1cfa42f5dc7250d4b936ccac831fb3c1028e1832fef69bf664596a8c441,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714417917519326975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes
.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3212de69ac372cf90c1735c062daa36d336d730750901cd5fb573b42df375e,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714417917398524057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kuber
netes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714417414458938341,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kuberne
tes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187512933738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187459716216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714417184691606405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714417163290641629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714417163188867021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59f3c551-27da-47a7-988d-dfe72dd33787 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.791723988Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=80f2bf0a-acf4-46c3-b02a-89fe29c2e19f name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.792171847Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c5f248cdad0a4e0c612e6124cf1ec86f5f5e7e51c8195186b1dae72669e820eb,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-nst7c,Uid:e810c83c-cdd7-4072-b8e8-319fd5aa4daa,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714417950725459016,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T19:03:29.306602561Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:19446d08654e14ba0fc1823d9b4dad71e2457cd842f2b4237041e278acb314a5,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-058855,Uid:0101d9bfd28f4f64a2207189ca2952df,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1714417928037431034,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0101d9bfd28f4f64a2207189ca2952df,},Annotations:map[string]string{kubernetes.io/config.hash: 0101d9bfd28f4f64a2207189ca2952df,kubernetes.io/config.seen: 2024-04-29T19:11:51.299684297Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:48b8b3bb4968f7483eebf06032b1a8accab07811f969d5231f87a2ccf2c7127f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bbq9x,Uid:a016fbf8-4a91-4f2f-97da-44b6e2195885,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714417917116637442,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04
-29T18:59:46.903824062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1572f7da-1bda-4b9e-a5fc-315aae3ba592,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714417917081263699,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":
\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-29T18:59:46.924967484Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:54d8909c7a920e28849cf9c10442ef50f0faf48e265fd2fa2c1fa044f97f7e93,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-njch8,Uid:823d223d-f7bd-4b9c-bdd9-8d0ae063d449,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714417917058890868,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/confi
g.seen: 2024-04-29T18:59:46.911070607Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-058855,Uid:af5ae94dd6fa640c6a87e1b677ca6ae6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714417917044983059,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.52:8443,kubernetes.io/config.hash: af5ae94dd6fa640c6a87e1b677ca6ae6,kubernetes.io/config.seen: 2024-04-29T18:59:29.540557126Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:720fc0053e31cfbb6f1170c0811bbea3d7a92267a445f2f9096e17724c461b24,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-0
58855,Uid:5382470eaba9fa40c319c5aaf393ee38,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714417917035584793,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5382470eaba9fa40c319c5aaf393ee38,kubernetes.io/config.seen: 2024-04-29T18:59:29.540558939Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1981e51a60fc9bfd1a839f81ae9faf09c9556e372755305615281483a1187fc7,Metadata:&PodSandboxMetadata{Name:kube-proxy-xldlc,Uid:a01564cb-ea76-4cc5-abad-d2d70b79bf6d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714417917023940193,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T18:59:42.450912478Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6ff12ce46f5f84dfc87db5bb207fbd9e412ab6d9f83e04aec492de99a510cd30,Metadata:&PodSandboxMetadata{Name:etcd-ha-058855,Uid:cd8cbd0a146b4ae041fb7271005e1408,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714417916998733878,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.52:2379,kubernetes.io/config.hash: cd8cbd0a146b4ae041fb7271005e1408,kubernetes.io/config.seen: 2024-04-29T18:59:29.540555891Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fbe98
7603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&PodSandboxMetadata{Name:kindnet-j42cd,Uid:13d10343-b59f-490f-ac7c-973271cc27d2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714417916989467027,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T18:59:42.461974567Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-058855,Uid:59d92703a0d641b881a7039575606286,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1714417916951173891,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.co
ntainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 59d92703a0d641b881a7039575606286,kubernetes.io/config.seen: 2024-04-29T18:59:29.540558131Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-nst7c,Uid:e810c83c-cdd7-4072-b8e8-319fd5aa4daa,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714417411421039384,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T19:03:29.306602561Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},&PodSandbox{Id:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bbq9x,Uid:a016fbf8-4a91-4f2f-97da-44b6e2195885,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714417187223296534,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T18:59:46.903824062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-njch8,Uid:823d223d-f7bd-4b9c-bdd9-8d0ae063d449,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714417187220860119,Labels:map[string]string{io.kubernetes.container.name: POD,io.kube
rnetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T18:59:46.911070607Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&PodSandboxMetadata{Name:kube-proxy-xldlc,Uid:a01564cb-ea76-4cc5-abad-d2d70b79bf6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714417184559633716,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T18:59:42.450912478Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&Po
dSandbox{Id:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-058855,Uid:5382470eaba9fa40c319c5aaf393ee38,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714417162996193913,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5382470eaba9fa40c319c5aaf393ee38,kubernetes.io/config.seen: 2024-04-29T18:59:22.507897793Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&PodSandboxMetadata{Name:etcd-ha-058855,Uid:cd8cbd0a146b4ae041fb7271005e1408,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1714417162946079316,Labels:map[string]string{component: etcd,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.52:2379,kubernetes.io/config.hash: cd8cbd0a146b4ae041fb7271005e1408,kubernetes.io/config.seen: 2024-04-29T18:59:22.507890334Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=80f2bf0a-acf4-46c3-b02a-89fe29c2e19f name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.793032991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a37600cc-4c63-446e-9499-4c591ca7d16b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.793121897Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a37600cc-4c63-446e-9499-4c591ca7d16b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.793505600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e191e297281741021e5309da12023e898fb42af47a910b5296fca453cf3a59a9,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714418014575610666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d56e42bb62f0802b29ab5431bfe35a9c4ed282bef23cd07745fd552f016a0c2,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714417998584511876,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dcb7268514a41d84040496fb3f97dd604c39d860db3795b1f536f6388d6c11,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417960583418252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b59ec3dc1e29a4c89fb2d40bf1cb3db18358c929912c01f77801025c117736f,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417958577543599,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49055a12b83d77f6453880eea876f9f8827a406c542e2fae249a50e1417f0583,PodSandboxId:c5f248cdad0a4e0c612e6124cf1ec86f5f5e7e51c8195186b1dae72669e820eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417950948693189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cc8a93682bcdac3c74aabfaf7ac1a16386d5e52b357267a4354a32e4789709,PodSandboxId:19446d08654e14ba0fc1823d9b4dad71e2457cd842f2b4237041e278acb314a5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417928149728501,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0101d9bfd28f4f64a2207189ca2952df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:2ca11a172d18b7da9d7ad94a0a9eae78db44bfaec6ec0ce8cc6be0a5c4d6e791,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714417919017837307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3234c6a2a02115d1a2b3c8db09477d14fa780e263e04d16a673863bdef318b03,PodSandboxId:1981e51a60fc9bfd1a839f81ae9faf09c9556e372755305615281483a1187fc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417917587343991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa254f41
7bd8c51401396df387d06fb731904675af71223321fec1e881d2e3bc,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714417917767697912,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b73fc09f93dd22fd87a22dc40dbad619e67ea8a27
b8e20dcf601f5e0f7ddcb,PodSandboxId:48b8b3bb4968f7483eebf06032b1a8accab07811f969d5231f87a2ccf2c7127f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917914910181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080f231760b7719587b43a8121d8b9e314e646c9be91cd1843e6879b061326ac,PodSandboxId:54d8909c7a920e28849cf9c10442ef50f0faf48e265fd2fa2c1fa044f97f7e93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917809121425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02cf56519f638778caaaa8342593494ae6cecd78d3a8f6122ae98be89f810dae,PodSandboxId:720fc0053e31cfbb6f1170c0811bbea3d7a92267a445f2f9096e17724c461b24,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417917657039067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53824
70eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3bfc6bba83dd30bc001418918d12a37f07affec561132fc8a6bfd32f7fca8c,PodSandboxId:6ff12ce46f5f84dfc87db5bb207fbd9e412ab6d9f83e04aec492de99a510cd30,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417917436371922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f21f1cfa42f5dc7250d4b936ccac831fb3c1028e1832fef69bf664596a8c441,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714417917519326975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes
.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3212de69ac372cf90c1735c062daa36d336d730750901cd5fb573b42df375e,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714417917398524057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kuber
netes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714417414458938341,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kuberne
tes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187512933738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187459716216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714417184691606405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714417163290641629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714417163188867021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a37600cc-4c63-446e-9499-4c591ca7d16b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.809114296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dfced6ba-48de-4cdd-834c-5a51b7073df6 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.809562505Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dfced6ba-48de-4cdd-834c-5a51b7073df6 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.811840525Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a8ab0aa-2b08-4e63-bd35-1a9bdbe7aac1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.812686498Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714418095812352491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a8ab0aa-2b08-4e63-bd35-1a9bdbe7aac1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.818809265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88b3b8ed-a66d-49ba-9388-834edfeefe94 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.819859562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88b3b8ed-a66d-49ba-9388-834edfeefe94 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:14:55 ha-058855 crio[4018]: time="2024-04-29 19:14:55.820413173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e191e297281741021e5309da12023e898fb42af47a910b5296fca453cf3a59a9,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714418014575610666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d56e42bb62f0802b29ab5431bfe35a9c4ed282bef23cd07745fd552f016a0c2,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714417998584511876,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dcb7268514a41d84040496fb3f97dd604c39d860db3795b1f536f6388d6c11,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417960583418252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b59ec3dc1e29a4c89fb2d40bf1cb3db18358c929912c01f77801025c117736f,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417958577543599,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49055a12b83d77f6453880eea876f9f8827a406c542e2fae249a50e1417f0583,PodSandboxId:c5f248cdad0a4e0c612e6124cf1ec86f5f5e7e51c8195186b1dae72669e820eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417950948693189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cc8a93682bcdac3c74aabfaf7ac1a16386d5e52b357267a4354a32e4789709,PodSandboxId:19446d08654e14ba0fc1823d9b4dad71e2457cd842f2b4237041e278acb314a5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417928149728501,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0101d9bfd28f4f64a2207189ca2952df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:2ca11a172d18b7da9d7ad94a0a9eae78db44bfaec6ec0ce8cc6be0a5c4d6e791,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714417919017837307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3234c6a2a02115d1a2b3c8db09477d14fa780e263e04d16a673863bdef318b03,PodSandboxId:1981e51a60fc9bfd1a839f81ae9faf09c9556e372755305615281483a1187fc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417917587343991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa254f41
7bd8c51401396df387d06fb731904675af71223321fec1e881d2e3bc,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714417917767697912,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b73fc09f93dd22fd87a22dc40dbad619e67ea8a27
b8e20dcf601f5e0f7ddcb,PodSandboxId:48b8b3bb4968f7483eebf06032b1a8accab07811f969d5231f87a2ccf2c7127f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917914910181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080f231760b7719587b43a8121d8b9e314e646c9be91cd1843e6879b061326ac,PodSandboxId:54d8909c7a920e28849cf9c10442ef50f0faf48e265fd2fa2c1fa044f97f7e93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917809121425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02cf56519f638778caaaa8342593494ae6cecd78d3a8f6122ae98be89f810dae,PodSandboxId:720fc0053e31cfbb6f1170c0811bbea3d7a92267a445f2f9096e17724c461b24,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417917657039067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53824
70eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3bfc6bba83dd30bc001418918d12a37f07affec561132fc8a6bfd32f7fca8c,PodSandboxId:6ff12ce46f5f84dfc87db5bb207fbd9e412ab6d9f83e04aec492de99a510cd30,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417917436371922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f21f1cfa42f5dc7250d4b936ccac831fb3c1028e1832fef69bf664596a8c441,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714417917519326975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes
.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3212de69ac372cf90c1735c062daa36d336d730750901cd5fb573b42df375e,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714417917398524057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kuber
netes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714417414458938341,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kuberne
tes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187512933738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187459716216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714417184691606405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714417163290641629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714417163188867021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88b3b8ed-a66d-49ba-9388-834edfeefe94 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e191e29728174       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       5                   ac8d70341e488       storage-provisioner
	7d56e42bb62f0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               4                   fbe987603e4ff       kindnet-j42cd
	31dcb7268514a       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      2 minutes ago        Running             kube-controller-manager   2                   e82216028935b       kube-controller-manager-ha-058855
	3b59ec3dc1e29       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      2 minutes ago        Running             kube-apiserver            3                   4c1f41849f6cc       kube-apiserver-ha-058855
	49055a12b83d7       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   c5f248cdad0a4       busybox-fc5497c4f-nst7c
	68cc8a93682bc       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   19446d08654e1       kube-vip-ha-058855
	2ca11a172d18b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       4                   ac8d70341e488       storage-provisioner
	86b73fc09f93d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   48b8b3bb4968f       coredns-7db6d8ff4d-bbq9x
	080f231760b77       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   54d8909c7a920       coredns-7db6d8ff4d-njch8
	aa254f417bd8c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               3                   fbe987603e4ff       kindnet-j42cd
	02cf56519f638       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      2 minutes ago        Running             kube-scheduler            1                   720fc0053e31c       kube-scheduler-ha-058855
	3234c6a2a0211       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      2 minutes ago        Running             kube-proxy                1                   1981e51a60fc9       kube-proxy-xldlc
	8f21f1cfa42f5       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      2 minutes ago        Exited              kube-apiserver            2                   4c1f41849f6cc       kube-apiserver-ha-058855
	ae3bfc6bba83d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   6ff12ce46f5f8       etcd-ha-058855
	0d3212de69ac3       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      2 minutes ago        Exited              kube-controller-manager   1                   e82216028935b       kube-controller-manager-ha-058855
	3ebcb4aac0715       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   5d6b9a26ffca4       busybox-fc5497c4f-nst7c
	35b38d136f10c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago       Exited              coredns                   0                   27fc4fec5e3f0       coredns-7db6d8ff4d-njch8
	db099f7f56f78       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago       Exited              coredns                   0                   1050f7bafa98e       coredns-7db6d8ff4d-bbq9x
	2e3b2e1683b77       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      15 minutes ago       Exited              kube-proxy                0                   fe7fa96de2987       kube-proxy-xldlc
	3c1cf7e86cc05       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      15 minutes ago       Exited              kube-scheduler            0                   eaa9cff42f55b       kube-scheduler-ha-058855
	f653b7a6c4efb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      15 minutes ago       Exited              etcd                      0                   40b3f5ad731ff       etcd-ha-058855
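The container status table above is the node's CRI-level view of every container, running or exited. It can be regenerated on the minikube host with crictl pointed at the CRI-O socket; a minimal sketch, assuming crictl is available on the VM and using the socket path reported in the node annotations below (unix:///var/run/crio/crio.sock):

    # list all containers, including exited ones, as in the table above
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
    # image filesystem usage, matching the ImageFsInfo response in the crio debug log above
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo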
	
	
	==> coredns [080f231760b7719587b43a8121d8b9e314e646c9be91cd1843e6879b061326ac] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40728->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40728->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40712->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40712->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40706->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40706->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
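The repeated "dial tcp 10.96.0.1:443: connect: no route to host" and "connection refused" failures above show coredns losing its watches on the in-cluster kubernetes Service while the apiserver was restarting. A quick way to confirm the Service and its backing EndpointSlices once the control plane is back, sketched under the assumption that the profile name from this log (ha-058855) is also the kubectl context:

    kubectl --context ha-058855 -n default get svc kubernetes
    kubectl --context ha-058855 -n default get endpointslices -l kubernetes.io/service-name=kubernetes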
	
	
	==> coredns [35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b] <==
	[INFO] 10.244.1.2:46625 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114006s
	[INFO] 10.244.1.2:57265 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118743s
	[INFO] 10.244.1.2:34075 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000376654s
	[INFO] 10.244.1.2:37316 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000287017s
	[INFO] 10.244.2.2:55857 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148708s
	[INFO] 10.244.2.2:34046 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114435s
	[INFO] 10.244.2.2:59123 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013463s
	[INFO] 10.244.0.4:52788 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139069s
	[INFO] 10.244.0.4:54898 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174069s
	[INFO] 10.244.0.4:50441 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004412s
	[INFO] 10.244.1.2:34029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183007s
	[INFO] 10.244.1.2:34413 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011174s
	[INFO] 10.244.1.2:46424 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144489s
	[INFO] 10.244.1.2:35983 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116269s
	[INFO] 10.244.2.2:36513 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000459857s
	[INFO] 10.244.0.4:40033 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000351605s
	[INFO] 10.244.0.4:45496 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128261s
	[INFO] 10.244.1.2:58777 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000204086s
	[INFO] 10.244.2.2:46697 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227863s
	[INFO] 10.244.2.2:60992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138077s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [86b73fc09f93dd22fd87a22dc40dbad619e67ea8a27b8e20dcf601f5e0f7ddcb] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38004->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38004->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38026->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38026->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38012->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38012->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe] <==
	[INFO] 10.244.0.4:38237 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178889s
	[INFO] 10.244.1.2:51028 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274871s
	[INFO] 10.244.1.2:44471 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001330026s
	[INFO] 10.244.1.2:42432 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122996s
	[INFO] 10.244.2.2:59580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000294012s
	[INFO] 10.244.2.2:60659 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00179161s
	[INFO] 10.244.2.2:39549 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000317743s
	[INFO] 10.244.2.2:43315 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001176961s
	[INFO] 10.244.2.2:32992 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190177s
	[INFO] 10.244.0.4:46409 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000047581s
	[INFO] 10.244.2.2:53037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141835s
	[INFO] 10.244.2.2:44640 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000203835s
	[INFO] 10.244.2.2:58171 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090591s
	[INFO] 10.244.0.4:44158 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106787s
	[INFO] 10.244.0.4:57643 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000199048s
	[INFO] 10.244.1.2:57285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127384s
	[INFO] 10.244.1.2:53223 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223061s
	[INFO] 10.244.1.2:54113 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106292s
	[INFO] 10.244.2.2:57470 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012081s
	[INFO] 10.244.2.2:35174 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139962s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-058855
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T18_59_30_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 18:59:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:14:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:12:41 +0000   Mon, 29 Apr 2024 18:59:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:12:41 +0000   Mon, 29 Apr 2024 18:59:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:12:41 +0000   Mon, 29 Apr 2024 18:59:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:12:41 +0000   Mon, 29 Apr 2024 18:59:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    ha-058855
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4dd245ae2fbf4ffeb364af3ff6801808
	  System UUID:                4dd245ae-2fbf-4ffe-b364-af3ff6801808
	  Boot ID:                    41ab0acc-a7d3-4500-bada-adc41451a660
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nst7c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-bbq9x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-njch8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-058855                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-j42cd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-058855             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-058855    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-xldlc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-058855             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-058855                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   Starting                 2m14s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-058855 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-058855 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-058855 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                    node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-058855 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Warning  ContainerGCFailed        3m27s (x2 over 4m27s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m3s                   node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Normal   RegisteredNode           2m2s                   node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Normal   RegisteredNode           27s                    node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	
	
	Name:               ha-058855-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_01_50_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:01:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:14:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:13:26 +0000   Mon, 29 Apr 2024 19:12:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:13:26 +0000   Mon, 29 Apr 2024 19:12:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:13:26 +0000   Mon, 29 Apr 2024 19:12:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:13:26 +0000   Mon, 29 Apr 2024 19:12:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-058855-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea727b7dfb674d998bb0a6c08dea140b
	  System UUID:                ea727b7d-fb67-4d99-8bb0-a6c08dea140b
	  Boot ID:                    8e31da5f-4ee6-43d7-b240-df0366f65859
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pr84n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-058855-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-xdtp4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-058855-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-058855-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-nz2rv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-058855-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-058855-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m4s                   kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-058855-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-058855-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-058855-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  NodeNotReady             9m45s                  node-controller  Node ha-058855-m02 status is now: NodeNotReady
	  Normal  Starting                 2m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node ha-058855-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node ha-058855-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m39s (x7 over 2m39s)  kubelet          Node ha-058855-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m3s                   node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  RegisteredNode           2m2s                   node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  RegisteredNode           27s                    node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	
	
	Name:               ha-058855-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_03_08_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:03:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:14:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:14:24 +0000   Mon, 29 Apr 2024 19:13:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:14:24 +0000   Mon, 29 Apr 2024 19:13:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:14:24 +0000   Mon, 29 Apr 2024 19:13:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:14:24 +0000   Mon, 29 Apr 2024 19:13:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    ha-058855-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b6bc3a75b3f42f3aa365abccb76fd49
	  System UUID:                5b6bc3a7-5b3f-42f3-aa36-5abccb76fd49
	  Boot ID:                    31c1a5a8-64d6-4f27-813c-685a2be04483
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xll26                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-058855-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-m4fgv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-058855-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-058855-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-29svc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-058855-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-058855-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 41s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-058855-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-058855-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-058855-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-058855-m03 event: Registered Node ha-058855-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-058855-m03 event: Registered Node ha-058855-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-058855-m03 event: Registered Node ha-058855-m03 in Controller
	  Normal   RegisteredNode           2m3s               node-controller  Node ha-058855-m03 event: Registered Node ha-058855-m03 in Controller
	  Normal   RegisteredNode           2m2s               node-controller  Node ha-058855-m03 event: Registered Node ha-058855-m03 in Controller
	  Normal   NodeNotReady             83s                node-controller  Node ha-058855-m03 status is now: NodeNotReady
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  62s (x2 over 62s)  kubelet          Node ha-058855-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x2 over 62s)  kubelet          Node ha-058855-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x2 over 62s)  kubelet          Node ha-058855-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 62s                kubelet          Node ha-058855-m03 has been rebooted, boot id: 31c1a5a8-64d6-4f27-813c-685a2be04483
	  Normal   NodeReady                62s                kubelet          Node ha-058855-m03 status is now: NodeReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-058855-m03 event: Registered Node ha-058855-m03 in Controller
	
	
	Name:               ha-058855-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_04_09_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:04:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:14:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:14:47 +0000   Mon, 29 Apr 2024 19:14:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:14:47 +0000   Mon, 29 Apr 2024 19:14:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:14:47 +0000   Mon, 29 Apr 2024 19:14:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:14:47 +0000   Mon, 29 Apr 2024 19:14:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    ha-058855-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbc9ec7037144061a802010c8eaa7400
	  System UUID:                fbc9ec70-3714-4061-a802-010c8eaa7400
	  Boot ID:                    5e7b908f-742d-4b2a-be01-e8237f91389e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8mzbn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-7qjvk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-058855-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-058855-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-058855-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-058855-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m3s               node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal   RegisteredNode           2m2s               node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal   NodeNotReady             83s                node-controller  Node ha-058855-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-058855-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-058855-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-058855-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-058855-m04 has been rebooted, boot id: 5e7b908f-742d-4b2a-be01-e8237f91389e
	  Normal   NodeReady                9s                 kubelet          Node ha-058855-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.063053] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066472] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.176661] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.148881] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.312890] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.946074] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.072175] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.019108] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +1.004098] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.172368] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[  +0.079206] kauditd_printk_skb: 30 callbacks suppressed
	[ +15.239291] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.268922] kauditd_printk_skb: 74 callbacks suppressed
	[Apr29 19:08] kauditd_printk_skb: 1 callbacks suppressed
	[Apr29 19:11] systemd-fstab-generator[3936]: Ignoring "noauto" option for root device
	[  +0.161031] systemd-fstab-generator[3948]: Ignoring "noauto" option for root device
	[  +0.208562] systemd-fstab-generator[3962]: Ignoring "noauto" option for root device
	[  +0.161312] systemd-fstab-generator[3974]: Ignoring "noauto" option for root device
	[  +0.320141] systemd-fstab-generator[4002]: Ignoring "noauto" option for root device
	[  +5.999485] systemd-fstab-generator[4104]: Ignoring "noauto" option for root device
	[  +0.096716] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.516553] kauditd_printk_skb: 12 callbacks suppressed
	[Apr29 19:12] kauditd_printk_skb: 87 callbacks suppressed
	[ +30.542820] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.806347] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [ae3bfc6bba83dd30bc001418918d12a37f07affec561132fc8a6bfd32f7fca8c] <==
	{"level":"warn","ts":"2024-04-29T19:13:52.002968Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"51d96a7d7a2ba286","error":"Get \"https://192.168.39.215:2380/version\": dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:13:53.617273Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"51d96a7d7a2ba286","rtt":"0s","error":"dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:13:53.617519Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"51d96a7d7a2ba286","rtt":"0s","error":"dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:13:56.005874Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.215:2380/version","remote-member-id":"51d96a7d7a2ba286","error":"Get \"https://192.168.39.215:2380/version\": dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:13:56.006643Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"51d96a7d7a2ba286","error":"Get \"https://192.168.39.215:2380/version\": dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:13:58.617502Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"51d96a7d7a2ba286","rtt":"0s","error":"dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:13:58.617649Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"51d96a7d7a2ba286","rtt":"0s","error":"dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:14:00.009304Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.215:2380/version","remote-member-id":"51d96a7d7a2ba286","error":"Get \"https://192.168.39.215:2380/version\": dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:14:00.00942Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"51d96a7d7a2ba286","error":"Get \"https://192.168.39.215:2380/version\": dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:14:03.618491Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"51d96a7d7a2ba286","rtt":"0s","error":"dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:14:03.618616Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"51d96a7d7a2ba286","rtt":"0s","error":"dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:14:04.011518Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.215:2380/version","remote-member-id":"51d96a7d7a2ba286","error":"Get \"https://192.168.39.215:2380/version\": dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:14:04.011882Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"51d96a7d7a2ba286","error":"Get \"https://192.168.39.215:2380/version\": dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:14:08.014178Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.215:2380/version","remote-member-id":"51d96a7d7a2ba286","error":"Get \"https://192.168.39.215:2380/version\": dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:14:08.014337Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"51d96a7d7a2ba286","error":"Get \"https://192.168.39.215:2380/version\": dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:14:08.619622Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"51d96a7d7a2ba286","rtt":"0s","error":"dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-29T19:14:08.620049Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"51d96a7d7a2ba286","rtt":"0s","error":"dial tcp 192.168.39.215:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-29T19:14:09.774528Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:14:09.775212Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:14:09.778635Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:14:09.807532Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3baf479dc31b93a9","to":"51d96a7d7a2ba286","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-29T19:14:09.807619Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:14:09.811457Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3baf479dc31b93a9","to":"51d96a7d7a2ba286","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-29T19:14:09.811541Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"warn","ts":"2024-04-29T19:14:09.817578Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.215:44990","server-name":"","error":"EOF"}
	
	
	==> etcd [f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067] <==
	2024/04/29 19:10:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-29T19:10:12.935884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.693648901s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-04-29T19:10:12.935899Z","caller":"traceutil/trace.go:171","msg":"trace[1300071160] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; }","duration":"7.693677573s","start":"2024-04-29T19:10:05.242217Z","end":"2024-04-29T19:10:12.935895Z","steps":["trace[1300071160] 'agreement among raft nodes before linearized reading'  (duration: 7.693657615s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T19:10:12.935918Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T19:10:05.242213Z","time spent":"7.693697226s","remote":"127.0.0.1:57172","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 "}
	2024/04/29 19:10:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-29T19:10:12.970694Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.52:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T19:10:12.970816Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.52:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T19:10:12.970991Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3baf479dc31b93a9","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-29T19:10:12.971277Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971327Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971398Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971552Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971614Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971658Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971692Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971701Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.97171Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.971732Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.971955Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.971989Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.972018Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.972058Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.975995Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2024-04-29T19:10:12.976324Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2024-04-29T19:10:12.976408Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-058855","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.52:2380"],"advertise-client-urls":["https://192.168.39.52:2379"]}
	
	
	==> kernel <==
	 19:14:56 up 16 min,  0 users,  load average: 0.35, 0.42, 0.29
	Linux ha-058855 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7d56e42bb62f0802b29ab5431bfe35a9c4ed282bef23cd07745fd552f016a0c2] <==
	I0429 19:14:19.946004       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	I0429 19:14:29.962844       1 main.go:223] Handling node with IPs: map[192.168.39.52:{}]
	I0429 19:14:29.962959       1 main.go:227] handling current node
	I0429 19:14:29.963007       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 19:14:29.963032       1 main.go:250] Node ha-058855-m02 has CIDR [10.244.1.0/24] 
	I0429 19:14:29.963313       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0429 19:14:29.963372       1 main.go:250] Node ha-058855-m03 has CIDR [10.244.2.0/24] 
	I0429 19:14:29.963543       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I0429 19:14:29.963581       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	I0429 19:14:39.979560       1 main.go:223] Handling node with IPs: map[192.168.39.52:{}]
	I0429 19:14:39.979615       1 main.go:227] handling current node
	I0429 19:14:39.979631       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 19:14:39.979640       1 main.go:250] Node ha-058855-m02 has CIDR [10.244.1.0/24] 
	I0429 19:14:39.979897       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0429 19:14:39.979910       1 main.go:250] Node ha-058855-m03 has CIDR [10.244.2.0/24] 
	I0429 19:14:39.979985       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I0429 19:14:39.980027       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	I0429 19:14:49.990369       1 main.go:223] Handling node with IPs: map[192.168.39.52:{}]
	I0429 19:14:49.990527       1 main.go:227] handling current node
	I0429 19:14:49.990562       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 19:14:49.990593       1 main.go:250] Node ha-058855-m02 has CIDR [10.244.1.0/24] 
	I0429 19:14:49.990752       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0429 19:14:49.990883       1 main.go:250] Node ha-058855-m03 has CIDR [10.244.2.0/24] 
	I0429 19:14:49.990980       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I0429 19:14:49.991005       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [aa254f417bd8c51401396df387d06fb731904675af71223321fec1e881d2e3bc] <==
	I0429 19:11:58.370539       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0429 19:11:58.373829       1 main.go:107] hostIP = 192.168.39.52
	podIP = 192.168.39.52
	I0429 19:11:58.374108       1 main.go:116] setting mtu 1500 for CNI 
	I0429 19:11:58.440085       1 main.go:146] kindnetd IP family: "ipv4"
	I0429 19:11:58.440144       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0429 19:12:08.676933       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0429 19:12:18.680550       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0429 19:12:19.934321       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0429 19:12:23.006275       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0429 19:12:26.009267       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [3b59ec3dc1e29a4c89fb2d40bf1cb3db18358c929912c01f77801025c117736f] <==
	I0429 19:12:40.654500       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0429 19:12:40.654563       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0429 19:12:40.824592       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 19:12:40.825243       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 19:12:40.845910       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 19:12:40.845968       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 19:12:40.846073       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 19:12:40.846190       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 19:12:40.854608       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 19:12:40.854875       1 aggregator.go:165] initial CRD sync complete...
	I0429 19:12:40.854899       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 19:12:40.854909       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 19:12:40.854916       1 cache.go:39] Caches are synced for autoregister controller
	I0429 19:12:40.886194       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 19:12:40.886247       1 policy_source.go:224] refreshing policies
	I0429 19:12:40.887301       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 19:12:40.904619       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 19:12:40.913109       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0429 19:12:40.928265       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.215]
	I0429 19:12:40.929742       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 19:12:40.964468       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0429 19:12:40.973274       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0429 19:12:41.642926       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0429 19:12:42.399574       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.215 192.168.39.27 192.168.39.52]
	W0429 19:12:52.404343       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.27 192.168.39.52]
	
	
	==> kube-apiserver [8f21f1cfa42f5dc7250d4b936ccac831fb3c1028e1832fef69bf664596a8c441] <==
	I0429 19:11:58.132927       1 options.go:221] external host was not specified, using 192.168.39.52
	I0429 19:11:58.134086       1 server.go:148] Version: v1.30.0
	I0429 19:11:58.134135       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:11:58.946383       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0429 19:11:58.947231       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0429 19:11:58.947360       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0429 19:11:58.947541       1 instance.go:299] Using reconciler: lease
	I0429 19:11:58.947306       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0429 19:12:18.942597       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0429 19:12:18.942596       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0429 19:12:18.948235       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0429 19:12:18.948484       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [0d3212de69ac372cf90c1735c062daa36d336d730750901cd5fb573b42df375e] <==
	I0429 19:11:59.383043       1 serving.go:380] Generated self-signed cert in-memory
	I0429 19:11:59.854921       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0429 19:11:59.855008       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:11:59.856659       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0429 19:11:59.856898       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 19:11:59.856921       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 19:11:59.856933       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0429 19:12:19.957270       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.52:8443/healthz\": dial tcp 192.168.39.52:8443: connect: connection refused"
	
	
	==> kube-controller-manager [31dcb7268514a41d84040496fb3f97dd604c39d860db3795b1f536f6388d6c11] <==
	I0429 19:12:54.631979       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0429 19:12:54.632023       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0429 19:12:54.632045       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0429 19:12:54.632069       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0429 19:12:54.636884       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0429 19:12:54.638835       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0429 19:12:55.060029       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 19:12:55.063605       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 19:12:55.063706       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 19:12:57.256935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.411803ms"
	I0429 19:12:57.258168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.958µs"
	I0429 19:12:57.330222       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-jsbv2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-jsbv2\": the object has been modified; please apply your changes to the latest version and try again"
	I0429 19:12:57.330448       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e57e4e85-b6e4-4e6c-b9c0-1b41406939ee", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-jsbv2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-jsbv2": the object has been modified; please apply your changes to the latest version and try again
	I0429 19:13:17.255211       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.824285ms"
	I0429 19:13:17.256081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.433µs"
	I0429 19:13:17.262843       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-jsbv2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-jsbv2\": the object has been modified; please apply your changes to the latest version and try again"
	I0429 19:13:17.263636       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e57e4e85-b6e4-4e6c-b9c0-1b41406939ee", APIVersion:"v1", ResourceVersion:"241", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-jsbv2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-jsbv2": the object has been modified; please apply your changes to the latest version and try again
	E0429 19:13:33.327952       1 daemon_controller.go:324] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"da7bc096-7969-4c21-9269-3f870dc74abd", ResourceVersion:"2385", Generation:1, CreationTimestamp:time.Date(2024, time.April, 29, 18, 59, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadat
a\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240202-8f1494ea\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hos
tPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0021d6400), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg",
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002df1338), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*
v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002df1350), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), Do
wnwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc002df1368), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.
ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Contai
ner{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240202-8f1494ea", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0021d6420)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0021d6460)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:r
esource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe
:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00305b6e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0031262d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003069000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00310cf50)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc003126320)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:4, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:4, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled
on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0429 19:13:33.442667       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.099057ms"
	I0429 19:13:33.444074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="101.78µs"
	I0429 19:13:55.375433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.222µs"
	I0429 19:14:13.095902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.330742ms"
	I0429 19:14:13.096352       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.568µs"
	I0429 19:14:47.787338       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-058855-m04"
	
	
	==> kube-proxy [2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5] <==
	E0429 19:09:02.559525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:05.631064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:05.631143       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:05.631219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:05.631265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:05.631451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:05.631504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:09.856229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:09.856269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:12.930544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:12.930639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:12.930749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:12.930858       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:22.144711       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:22.144955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:25.216026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:25.216097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:25.216292       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:25.216346       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:43.647034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:43.647716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:52.864464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:52.864734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:52.864607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:52.864899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [3234c6a2a02115d1a2b3c8db09477d14fa780e263e04d16a673863bdef318b03] <==
	I0429 19:11:59.708697       1 server_linux.go:69] "Using iptables proxy"
	E0429 19:12:01.887290       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-058855\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 19:12:04.958370       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-058855\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 19:12:08.031520       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-058855\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 19:12:14.176187       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-058855\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 19:12:23.391453       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-058855\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0429 19:12:41.425661       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.52"]
	I0429 19:12:41.508235       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:12:41.508332       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:12:41.508353       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:12:41.511642       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:12:41.512052       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:12:41.512100       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:12:41.514289       1 config.go:192] "Starting service config controller"
	I0429 19:12:41.514348       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:12:41.514414       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:12:41.514450       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:12:41.515318       1 config.go:319] "Starting node config controller"
	I0429 19:12:41.515362       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:12:41.614914       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:12:41.615037       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:12:41.615905       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [02cf56519f638778caaaa8342593494ae6cecd78d3a8f6122ae98be89f810dae] <==
	W0429 19:12:30.611206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.52:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:30.611313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.52:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:33.894563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.52:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:33.894698       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.52:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:34.104204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.52:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:34.104302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.52:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:34.169432       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.52:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:34.169498       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.52:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:36.140567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.52:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:36.140616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.52:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:36.662102       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.52:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:36.662244       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.52:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:37.717059       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.52:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:37.717150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.52:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:37.835575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.52:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:37.835663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.52:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:38.207302       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.52:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:38.207669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.52:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:40.663650       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:12:40.663717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:12:40.663887       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:12:40.663928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:12:40.664018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 19:12:40.664031       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0429 19:12:54.370750       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad] <==
	W0429 19:10:05.873309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 19:10:05.873469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 19:10:06.230848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:10:06.230984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:10:06.371480       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 19:10:06.371535       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 19:10:06.406073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 19:10:06.406142       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 19:10:06.431343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 19:10:06.431459       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 19:10:06.538567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 19:10:06.538899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 19:10:07.093946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 19:10:07.094036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 19:10:07.178140       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 19:10:07.178200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 19:10:07.391982       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 19:10:07.392186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 19:10:07.519987       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 19:10:07.520074       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 19:10:07.899992       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 19:10:07.900060       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 19:10:07.959224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:10:07.959348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:10:12.895846       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 29 19:12:51 ha-058855 kubelet[1376]: I0429 19:12:51.560388    1376 scope.go:117] "RemoveContainer" containerID="aa254f417bd8c51401396df387d06fb731904675af71223321fec1e881d2e3bc"
	Apr 29 19:12:51 ha-058855 kubelet[1376]: E0429 19:12:51.560690    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-j42cd_kube-system(13d10343-b59f-490f-ac7c-973271cc27d2)\"" pod="kube-system/kindnet-j42cd" podUID="13d10343-b59f-490f-ac7c-973271cc27d2"
	Apr 29 19:12:53 ha-058855 kubelet[1376]: I0429 19:12:53.561135    1376 scope.go:117] "RemoveContainer" containerID="2ca11a172d18b7da9d7ad94a0a9eae78db44bfaec6ec0ce8cc6be0a5c4d6e791"
	Apr 29 19:12:53 ha-058855 kubelet[1376]: E0429 19:12:53.561340    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1572f7da-1bda-4b9e-a5fc-315aae3ba592)\"" pod="kube-system/storage-provisioner" podUID="1572f7da-1bda-4b9e-a5fc-315aae3ba592"
	Apr 29 19:13:06 ha-058855 kubelet[1376]: I0429 19:13:06.560891    1376 scope.go:117] "RemoveContainer" containerID="aa254f417bd8c51401396df387d06fb731904675af71223321fec1e881d2e3bc"
	Apr 29 19:13:06 ha-058855 kubelet[1376]: E0429 19:13:06.561179    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-j42cd_kube-system(13d10343-b59f-490f-ac7c-973271cc27d2)\"" pod="kube-system/kindnet-j42cd" podUID="13d10343-b59f-490f-ac7c-973271cc27d2"
	Apr 29 19:13:06 ha-058855 kubelet[1376]: I0429 19:13:06.561477    1376 scope.go:117] "RemoveContainer" containerID="2ca11a172d18b7da9d7ad94a0a9eae78db44bfaec6ec0ce8cc6be0a5c4d6e791"
	Apr 29 19:13:06 ha-058855 kubelet[1376]: E0429 19:13:06.561700    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1572f7da-1bda-4b9e-a5fc-315aae3ba592)\"" pod="kube-system/storage-provisioner" podUID="1572f7da-1bda-4b9e-a5fc-315aae3ba592"
	Apr 29 19:13:09 ha-058855 kubelet[1376]: I0429 19:13:09.619523    1376 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-nst7c" podStartSLOduration=577.77841775 podStartE2EDuration="9m40.61949424s" podCreationTimestamp="2024-04-29 19:03:29 +0000 UTC" firstStartedPulling="2024-04-29 19:03:31.59844546 +0000 UTC m=+242.169893223" lastFinishedPulling="2024-04-29 19:03:34.439521947 +0000 UTC m=+245.010969713" observedRunningTime="2024-04-29 19:03:34.791960004 +0000 UTC m=+245.363407788" watchObservedRunningTime="2024-04-29 19:13:09.61949424 +0000 UTC m=+820.190942024"
	Apr 29 19:13:18 ha-058855 kubelet[1376]: I0429 19:13:18.561153    1376 scope.go:117] "RemoveContainer" containerID="aa254f417bd8c51401396df387d06fb731904675af71223321fec1e881d2e3bc"
	Apr 29 19:13:20 ha-058855 kubelet[1376]: I0429 19:13:20.561536    1376 scope.go:117] "RemoveContainer" containerID="2ca11a172d18b7da9d7ad94a0a9eae78db44bfaec6ec0ce8cc6be0a5c4d6e791"
	Apr 29 19:13:20 ha-058855 kubelet[1376]: E0429 19:13:20.561884    1376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(1572f7da-1bda-4b9e-a5fc-315aae3ba592)\"" pod="kube-system/storage-provisioner" podUID="1572f7da-1bda-4b9e-a5fc-315aae3ba592"
	Apr 29 19:13:29 ha-058855 kubelet[1376]: I0429 19:13:29.561383    1376 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-058855" podUID="76e512c7-e0ea-417e-8239-63bb073dc04d"
	Apr 29 19:13:29 ha-058855 kubelet[1376]: I0429 19:13:29.590314    1376 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-058855"
	Apr 29 19:13:29 ha-058855 kubelet[1376]: E0429 19:13:29.607747    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:13:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:13:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:13:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:13:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:13:34 ha-058855 kubelet[1376]: I0429 19:13:34.560378    1376 scope.go:117] "RemoveContainer" containerID="2ca11a172d18b7da9d7ad94a0a9eae78db44bfaec6ec0ce8cc6be0a5c4d6e791"
	Apr 29 19:14:29 ha-058855 kubelet[1376]: E0429 19:14:29.601924    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:14:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:14:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:14:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:14:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 19:14:55.253045   38656 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18774-7754/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
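The "token too long" error captured just above comes from bufio.Scanner, whose default per-token limit is bufio.MaxScanTokenSize (64 KiB); a line in lastStart.txt longer than that aborts the scan with bufio.ErrTooLong. Below is a minimal, illustrative Go sketch (the file path and the 1 MiB cap are assumptions, not minikube's actual code) showing how a caller can raise that limit:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical path; stands in for the lastStart.txt named in the error above.
	f, err := os.Open("/tmp/lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// Raise the token limit from the 64 KiB default to 1 MiB so very long
	// log lines no longer trigger bufio.ErrTooLong ("token too long").
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)

	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}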
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-058855 -n ha-058855
helpers_test.go:261: (dbg) Run:  kubectl --context ha-058855 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (408.64s)
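The controller-manager entry in the logs above ("failed to wait for apiserver being healthy: ... failed to get apiserver /healthz status") describes a readiness poll against https://192.168.39.52:8443/healthz that gave up at its deadline. A minimal Go sketch of that kind of poll follows; the timeout values are assumptions and TLS verification is skipped purely for brevity, so this is illustrative rather than the component's real implementation:

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or ctx expires.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %s: %w", url, ctx.Err())
		case <-time.After(time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := waitForHealthz(ctx, "https://192.168.39.52:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}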

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-058855 stop -v=7 --alsologtostderr: exit status 82 (2m0.478339148s)

                                                
                                                
-- stdout --
	* Stopping node "ha-058855-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:15:15.862699   39066 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:15:15.862824   39066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:15:15.862839   39066 out.go:304] Setting ErrFile to fd 2...
	I0429 19:15:15.862855   39066 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:15:15.863314   39066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:15:15.863596   39066 out.go:298] Setting JSON to false
	I0429 19:15:15.863668   39066 mustload.go:65] Loading cluster: ha-058855
	I0429 19:15:15.864030   39066 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:15:15.864126   39066 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 19:15:15.864304   39066 mustload.go:65] Loading cluster: ha-058855
	I0429 19:15:15.864423   39066 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:15:15.864453   39066 stop.go:39] StopHost: ha-058855-m04
	I0429 19:15:15.864814   39066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:15:15.864851   39066 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:15:15.880270   39066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
	I0429 19:15:15.880771   39066 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:15:15.881445   39066 main.go:141] libmachine: Using API Version  1
	I0429 19:15:15.881473   39066 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:15:15.881790   39066 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:15:15.884241   39066 out.go:177] * Stopping node "ha-058855-m04"  ...
	I0429 19:15:15.885715   39066 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 19:15:15.885739   39066 main.go:141] libmachine: (ha-058855-m04) Calling .DriverName
	I0429 19:15:15.885962   39066 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 19:15:15.885988   39066 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHHostname
	I0429 19:15:15.888766   39066 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:15:15.889226   39066 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:14:42 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:15:15.889259   39066 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:15:15.889485   39066 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHPort
	I0429 19:15:15.889658   39066 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHKeyPath
	I0429 19:15:15.889813   39066 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHUsername
	I0429 19:15:15.890044   39066 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m04/id_rsa Username:docker}
	I0429 19:15:15.974186   39066 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 19:15:16.027953   39066 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 19:15:16.082559   39066 main.go:141] libmachine: Stopping "ha-058855-m04"...
	I0429 19:15:16.082587   39066 main.go:141] libmachine: (ha-058855-m04) Calling .GetState
	I0429 19:15:16.084265   39066 main.go:141] libmachine: (ha-058855-m04) Calling .Stop
	I0429 19:15:16.087687   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 0/120
	I0429 19:15:17.089050   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 1/120
	I0429 19:15:18.090234   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 2/120
	I0429 19:15:19.091662   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 3/120
	I0429 19:15:20.092885   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 4/120
	I0429 19:15:21.095116   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 5/120
	I0429 19:15:22.096630   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 6/120
	I0429 19:15:23.097980   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 7/120
	I0429 19:15:24.099217   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 8/120
	I0429 19:15:25.100440   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 9/120
	I0429 19:15:26.101611   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 10/120
	I0429 19:15:27.103150   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 11/120
	I0429 19:15:28.105518   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 12/120
	I0429 19:15:29.107249   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 13/120
	I0429 19:15:30.109550   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 14/120
	I0429 19:15:31.111643   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 15/120
	I0429 19:15:32.113420   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 16/120
	I0429 19:15:33.114845   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 17/120
	I0429 19:15:34.116716   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 18/120
	I0429 19:15:35.117927   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 19/120
	I0429 19:15:36.119779   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 20/120
	I0429 19:15:37.121744   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 21/120
	I0429 19:15:38.123094   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 22/120
	I0429 19:15:39.124596   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 23/120
	I0429 19:15:40.126138   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 24/120
	I0429 19:15:41.127959   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 25/120
	I0429 19:15:42.129593   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 26/120
	I0429 19:15:43.131246   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 27/120
	I0429 19:15:44.133281   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 28/120
	I0429 19:15:45.134998   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 29/120
	I0429 19:15:46.137072   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 30/120
	I0429 19:15:47.138480   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 31/120
	I0429 19:15:48.139783   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 32/120
	I0429 19:15:49.141315   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 33/120
	I0429 19:15:50.142776   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 34/120
	I0429 19:15:51.144360   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 35/120
	I0429 19:15:52.145704   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 36/120
	I0429 19:15:53.147456   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 37/120
	I0429 19:15:54.148996   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 38/120
	I0429 19:15:55.150470   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 39/120
	I0429 19:15:56.152654   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 40/120
	I0429 19:15:57.153845   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 41/120
	I0429 19:15:58.155532   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 42/120
	I0429 19:15:59.156946   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 43/120
	I0429 19:16:00.158311   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 44/120
	I0429 19:16:01.159716   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 45/120
	I0429 19:16:02.161068   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 46/120
	I0429 19:16:03.163353   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 47/120
	I0429 19:16:04.164839   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 48/120
	I0429 19:16:05.166633   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 49/120
	I0429 19:16:06.168632   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 50/120
	I0429 19:16:07.170150   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 51/120
	I0429 19:16:08.171701   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 52/120
	I0429 19:16:09.173354   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 53/120
	I0429 19:16:10.174469   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 54/120
	I0429 19:16:11.176389   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 55/120
	I0429 19:16:12.178001   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 56/120
	I0429 19:16:13.179721   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 57/120
	I0429 19:16:14.181259   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 58/120
	I0429 19:16:15.182637   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 59/120
	I0429 19:16:16.184719   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 60/120
	I0429 19:16:17.186202   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 61/120
	I0429 19:16:18.187756   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 62/120
	I0429 19:16:19.189093   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 63/120
	I0429 19:16:20.190772   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 64/120
	I0429 19:16:21.192571   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 65/120
	I0429 19:16:22.194045   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 66/120
	I0429 19:16:23.195405   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 67/120
	I0429 19:16:24.197156   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 68/120
	I0429 19:16:25.198511   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 69/120
	I0429 19:16:26.200725   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 70/120
	I0429 19:16:27.202054   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 71/120
	I0429 19:16:28.203557   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 72/120
	I0429 19:16:29.204856   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 73/120
	I0429 19:16:30.206134   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 74/120
	I0429 19:16:31.208377   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 75/120
	I0429 19:16:32.209872   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 76/120
	I0429 19:16:33.211117   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 77/120
	I0429 19:16:34.212634   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 78/120
	I0429 19:16:35.213977   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 79/120
	I0429 19:16:36.215721   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 80/120
	I0429 19:16:37.217157   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 81/120
	I0429 19:16:38.218790   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 82/120
	I0429 19:16:39.220735   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 83/120
	I0429 19:16:40.222022   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 84/120
	I0429 19:16:41.224043   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 85/120
	I0429 19:16:42.225532   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 86/120
	I0429 19:16:43.226925   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 87/120
	I0429 19:16:44.228387   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 88/120
	I0429 19:16:45.229900   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 89/120
	I0429 19:16:46.231999   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 90/120
	I0429 19:16:47.234359   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 91/120
	I0429 19:16:48.235700   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 92/120
	I0429 19:16:49.236905   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 93/120
	I0429 19:16:50.238407   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 94/120
	I0429 19:16:51.240370   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 95/120
	I0429 19:16:52.241835   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 96/120
	I0429 19:16:53.243662   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 97/120
	I0429 19:16:54.245631   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 98/120
	I0429 19:16:55.246859   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 99/120
	I0429 19:16:56.248415   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 100/120
	I0429 19:16:57.249712   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 101/120
	I0429 19:16:58.250810   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 102/120
	I0429 19:16:59.252390   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 103/120
	I0429 19:17:00.253699   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 104/120
	I0429 19:17:01.255390   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 105/120
	I0429 19:17:02.256615   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 106/120
	I0429 19:17:03.258253   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 107/120
	I0429 19:17:04.260407   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 108/120
	I0429 19:17:05.261646   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 109/120
	I0429 19:17:06.263892   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 110/120
	I0429 19:17:07.265209   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 111/120
	I0429 19:17:08.266544   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 112/120
	I0429 19:17:09.268921   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 113/120
	I0429 19:17:10.270276   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 114/120
	I0429 19:17:11.271993   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 115/120
	I0429 19:17:12.273338   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 116/120
	I0429 19:17:13.274854   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 117/120
	I0429 19:17:14.276082   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 118/120
	I0429 19:17:15.277420   39066 main.go:141] libmachine: (ha-058855-m04) Waiting for machine to stop 119/120
	I0429 19:17:16.278407   39066 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0429 19:17:16.278472   39066 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0429 19:17:16.280355   39066 out.go:177] 
	W0429 19:17:16.281544   39066 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0429 19:17:16.281559   39066 out.go:239] * 
	* 
	W0429 19:17:16.283750   39066 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 19:17:16.285276   39066 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-058855 stop -v=7 --alsologtostderr": exit status 82
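The stop loop above polls the domain state 120 times (19:15:16 to 19:17:16, about two minutes) and gives up while libvirt still reports ha-058855-m04 as "Running", which is what trips GUEST_STOP_TIMEOUT. A minimal sketch for checking and force-stopping the stuck guest from the host, assuming the default qemu:///system libvirt URI and that the libvirt domain name matches the minikube machine name (neither assumption is confirmed by this run):

	virsh -c qemu:///system list --all              # is ha-058855-m04 still listed as running?
	virsh -c qemu:///system shutdown ha-058855-m04  # request an ACPI shutdown of the guest
	virsh -c qemu:///system destroy ha-058855-m04   # hard power-off if the guest ignores the request

If the forced stop succeeds, the logs suggested in the error box (minikube logs --file=logs.txt) can still be collected from the remaining nodes.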
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr: exit status 3 (19.055412337s)

                                                
                                                
-- stdout --
	ha-058855
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-058855-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:17:16.343915   39516 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:17:16.344080   39516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:17:16.344100   39516 out.go:304] Setting ErrFile to fd 2...
	I0429 19:17:16.344108   39516 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:17:16.344443   39516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:17:16.344648   39516 out.go:298] Setting JSON to false
	I0429 19:17:16.344677   39516 mustload.go:65] Loading cluster: ha-058855
	I0429 19:17:16.344751   39516 notify.go:220] Checking for updates...
	I0429 19:17:16.345094   39516 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:17:16.345110   39516 status.go:255] checking status of ha-058855 ...
	I0429 19:17:16.345520   39516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:17:16.345572   39516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:17:16.361527   39516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44983
	I0429 19:17:16.362039   39516 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:17:16.362622   39516 main.go:141] libmachine: Using API Version  1
	I0429 19:17:16.362647   39516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:17:16.362981   39516 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:17:16.363161   39516 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:17:16.364706   39516 status.go:330] ha-058855 host status = "Running" (err=<nil>)
	I0429 19:17:16.364723   39516 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:17:16.365021   39516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:17:16.365065   39516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:17:16.380196   39516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36415
	I0429 19:17:16.380667   39516 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:17:16.381106   39516 main.go:141] libmachine: Using API Version  1
	I0429 19:17:16.381133   39516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:17:16.381400   39516 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:17:16.381607   39516 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:17:16.384210   39516 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:17:16.384642   39516 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:17:16.384668   39516 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:17:16.384812   39516 host.go:66] Checking if "ha-058855" exists ...
	I0429 19:17:16.385107   39516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:17:16.385142   39516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:17:16.400252   39516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44111
	I0429 19:17:16.400655   39516 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:17:16.401159   39516 main.go:141] libmachine: Using API Version  1
	I0429 19:17:16.401180   39516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:17:16.401507   39516 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:17:16.401690   39516 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:17:16.401874   39516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:17:16.401905   39516 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:17:16.404697   39516 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:17:16.405119   39516 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:17:16.405144   39516 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:17:16.405280   39516 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:17:16.405435   39516 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:17:16.405560   39516 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:17:16.405703   39516 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:17:16.505111   39516 ssh_runner.go:195] Run: systemctl --version
	I0429 19:17:16.513906   39516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:17:16.537344   39516 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:17:16.537376   39516 api_server.go:166] Checking apiserver status ...
	I0429 19:17:16.537406   39516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:17:16.554498   39516 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5264/cgroup
	W0429 19:17:16.565815   39516 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5264/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:17:16.565862   39516 ssh_runner.go:195] Run: ls
	I0429 19:17:16.572447   39516 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:17:16.578367   39516 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:17:16.578396   39516 status.go:422] ha-058855 apiserver status = Running (err=<nil>)
	I0429 19:17:16.578408   39516 status.go:257] ha-058855 status: &{Name:ha-058855 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:17:16.578432   39516 status.go:255] checking status of ha-058855-m02 ...
	I0429 19:17:16.578823   39516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:17:16.578872   39516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:17:16.594361   39516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0429 19:17:16.594815   39516 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:17:16.595297   39516 main.go:141] libmachine: Using API Version  1
	I0429 19:17:16.595318   39516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:17:16.595626   39516 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:17:16.595805   39516 main.go:141] libmachine: (ha-058855-m02) Calling .GetState
	I0429 19:17:16.597332   39516 status.go:330] ha-058855-m02 host status = "Running" (err=<nil>)
	I0429 19:17:16.597349   39516 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:17:16.597681   39516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:17:16.597712   39516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:17:16.612507   39516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35671
	I0429 19:17:16.612908   39516 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:17:16.613365   39516 main.go:141] libmachine: Using API Version  1
	I0429 19:17:16.613393   39516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:17:16.613660   39516 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:17:16.613826   39516 main.go:141] libmachine: (ha-058855-m02) Calling .GetIP
	I0429 19:17:16.616304   39516 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:17:16.616722   39516 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:12:05 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:17:16.616743   39516 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:17:16.616885   39516 host.go:66] Checking if "ha-058855-m02" exists ...
	I0429 19:17:16.617165   39516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:17:16.617208   39516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:17:16.631222   39516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34445
	I0429 19:17:16.631708   39516 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:17:16.632157   39516 main.go:141] libmachine: Using API Version  1
	I0429 19:17:16.632176   39516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:17:16.632509   39516 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:17:16.632669   39516 main.go:141] libmachine: (ha-058855-m02) Calling .DriverName
	I0429 19:17:16.632855   39516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:17:16.632874   39516 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHHostname
	I0429 19:17:16.635316   39516 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:17:16.635693   39516 main.go:141] libmachine: (ha-058855-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:81:20", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:12:05 +0000 UTC Type:0 Mac:52:54:00:98:81:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-058855-m02 Clientid:01:52:54:00:98:81:20}
	I0429 19:17:16.635724   39516 main.go:141] libmachine: (ha-058855-m02) DBG | domain ha-058855-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:98:81:20 in network mk-ha-058855
	I0429 19:17:16.635902   39516 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHPort
	I0429 19:17:16.636080   39516 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHKeyPath
	I0429 19:17:16.636243   39516 main.go:141] libmachine: (ha-058855-m02) Calling .GetSSHUsername
	I0429 19:17:16.636368   39516 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m02/id_rsa Username:docker}
	I0429 19:17:16.728346   39516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:17:16.747508   39516 kubeconfig.go:125] found "ha-058855" server: "https://192.168.39.254:8443"
	I0429 19:17:16.747536   39516 api_server.go:166] Checking apiserver status ...
	I0429 19:17:16.747564   39516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:17:16.763350   39516 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1391/cgroup
	W0429 19:17:16.774761   39516 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1391/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:17:16.774824   39516 ssh_runner.go:195] Run: ls
	I0429 19:17:16.780055   39516 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0429 19:17:16.784651   39516 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0429 19:17:16.784677   39516 status.go:422] ha-058855-m02 apiserver status = Running (err=<nil>)
	I0429 19:17:16.784687   39516 status.go:257] ha-058855-m02 status: &{Name:ha-058855-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:17:16.784705   39516 status.go:255] checking status of ha-058855-m04 ...
	I0429 19:17:16.785084   39516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:17:16.785129   39516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:17:16.799683   39516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35263
	I0429 19:17:16.800083   39516 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:17:16.800535   39516 main.go:141] libmachine: Using API Version  1
	I0429 19:17:16.800556   39516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:17:16.800859   39516 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:17:16.801068   39516 main.go:141] libmachine: (ha-058855-m04) Calling .GetState
	I0429 19:17:16.802552   39516 status.go:330] ha-058855-m04 host status = "Running" (err=<nil>)
	I0429 19:17:16.802570   39516 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:17:16.802877   39516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:17:16.802918   39516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:17:16.816836   39516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0429 19:17:16.817249   39516 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:17:16.817693   39516 main.go:141] libmachine: Using API Version  1
	I0429 19:17:16.817713   39516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:17:16.817999   39516 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:17:16.818167   39516 main.go:141] libmachine: (ha-058855-m04) Calling .GetIP
	I0429 19:17:16.820770   39516 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:17:16.821177   39516 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:14:42 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:17:16.821205   39516 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:17:16.821330   39516 host.go:66] Checking if "ha-058855-m04" exists ...
	I0429 19:17:16.821649   39516 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:17:16.821685   39516 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:17:16.835582   39516 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36921
	I0429 19:17:16.835909   39516 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:17:16.836304   39516 main.go:141] libmachine: Using API Version  1
	I0429 19:17:16.836328   39516 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:17:16.836620   39516 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:17:16.836845   39516 main.go:141] libmachine: (ha-058855-m04) Calling .DriverName
	I0429 19:17:16.837031   39516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:17:16.837056   39516 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHHostname
	I0429 19:17:16.839754   39516 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:17:16.840135   39516 main.go:141] libmachine: (ha-058855-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:dc", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 20:14:42 +0000 UTC Type:0 Mac:52:54:00:d3:3c:dc Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-058855-m04 Clientid:01:52:54:00:d3:3c:dc}
	I0429 19:17:16.840167   39516 main.go:141] libmachine: (ha-058855-m04) DBG | domain ha-058855-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:d3:3c:dc in network mk-ha-058855
	I0429 19:17:16.840348   39516 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHPort
	I0429 19:17:16.840507   39516 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHKeyPath
	I0429 19:17:16.840672   39516 main.go:141] libmachine: (ha-058855-m04) Calling .GetSSHUsername
	I0429 19:17:16.840801   39516 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855-m04/id_rsa Username:docker}
	W0429 19:17:35.342247   39516 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.119:22: connect: no route to host
	W0429 19:17:35.342348   39516 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.119:22: connect: no route to host
	E0429 19:17:35.342369   39516 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.119:22: connect: no route to host
	I0429 19:17:35.342385   39516 status.go:257] ha-058855-m04 status: &{Name:ha-058855-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0429 19:17:35.342414   39516 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.119:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr" : exit status 3
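The status probe only fails at the SSH dial to ha-058855-m04 (dial tcp 192.168.39.119:22: connect: no route to host), which suggests the guest dropped off the network some time after the stop attempt timed out. A quick reachability check from the host, assuming the same qemu:///system libvirt URI and the node IP reported above:

	virsh -c qemu:///system domstate ha-058855-m04   # current libvirt view of the guest
	ping -c 3 192.168.39.119                         # does the node answer on the network at all?
	nc -vz -w 5 192.168.39.119 22                    # is sshd reachable on port 22?

A failed SSH session is what produces the "host: Error" / "kubelet: Nonexistent" rows shown for ha-058855-m04 in the stdout block above.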
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-058855 -n ha-058855
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-058855 logs -n 25: (1.868307911s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-058855 ssh -n ha-058855-m02 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m03_ha-058855-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04:/home/docker/cp-test_ha-058855-m03_ha-058855-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m04 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m03_ha-058855-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-058855 cp testdata/cp-test.txt                                                | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1826286980/001/cp-test_ha-058855-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855:/home/docker/cp-test_ha-058855-m04_ha-058855.txt                       |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855 sudo cat                                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m04_ha-058855.txt                                 |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m02:/home/docker/cp-test_ha-058855-m04_ha-058855-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m02 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m04_ha-058855-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m03:/home/docker/cp-test_ha-058855-m04_ha-058855-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n                                                                 | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | ha-058855-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-058855 ssh -n ha-058855-m03 sudo cat                                          | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC | 29 Apr 24 19:04 UTC |
	|         | /home/docker/cp-test_ha-058855-m04_ha-058855-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-058855 node stop m02 -v=7                                                     | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-058855 node start m02 -v=7                                                    | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:07 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-058855 -v=7                                                           | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-058855 -v=7                                                                | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-058855 --wait=true -v=7                                                    | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:10 UTC | 29 Apr 24 19:14 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-058855                                                                | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:14 UTC |                     |
	| node    | ha-058855 node delete m03 -v=7                                                   | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:14 UTC | 29 Apr 24 19:15 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-058855 stop -v=7                                                              | ha-058855 | jenkins | v1.33.0 | 29 Apr 24 19:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 19:10:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 19:10:11.959403   37131 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:10:11.959544   37131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:10:11.959558   37131 out.go:304] Setting ErrFile to fd 2...
	I0429 19:10:11.959580   37131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:10:11.959792   37131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:10:11.960337   37131 out.go:298] Setting JSON to false
	I0429 19:10:11.961341   37131 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3110,"bootTime":1714414702,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 19:10:11.961404   37131 start.go:139] virtualization: kvm guest
	I0429 19:10:11.963766   37131 out.go:177] * [ha-058855] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 19:10:11.965451   37131 notify.go:220] Checking for updates...
	I0429 19:10:11.965462   37131 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:10:11.967025   37131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:10:11.968509   37131 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:10:11.969814   37131 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:10:11.971109   37131 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 19:10:11.972470   37131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:10:11.974405   37131 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:10:11.974553   37131 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:10:11.975119   37131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:10:11.975173   37131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:10:11.989975   37131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44421
	I0429 19:10:11.990440   37131 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:10:11.991047   37131 main.go:141] libmachine: Using API Version  1
	I0429 19:10:11.991075   37131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:10:11.991488   37131 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:10:11.991678   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:10:12.029107   37131 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 19:10:12.030413   37131 start.go:297] selected driver: kvm2
	I0429 19:10:12.030425   37131 start.go:901] validating driver "kvm2" against &{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:10:12.030551   37131 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:10:12.030856   37131 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:10:12.030923   37131 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 19:10:12.046138   37131 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 19:10:12.047024   37131 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:10:12.047108   37131 cni.go:84] Creating CNI manager for ""
	I0429 19:10:12.047127   37131 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0429 19:10:12.047207   37131 start.go:340] cluster config:
	{Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:10:12.047415   37131 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:10:12.049397   37131 out.go:177] * Starting "ha-058855" primary control-plane node in "ha-058855" cluster
	I0429 19:10:12.050731   37131 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:10:12.050776   37131 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 19:10:12.050790   37131 cache.go:56] Caching tarball of preloaded images
	I0429 19:10:12.050875   37131 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 19:10:12.050885   37131 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 19:10:12.051032   37131 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/config.json ...
	I0429 19:10:12.051303   37131 start.go:360] acquireMachinesLock for ha-058855: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:10:12.051354   37131 start.go:364] duration metric: took 26.841µs to acquireMachinesLock for "ha-058855"
	I0429 19:10:12.051376   37131 start.go:96] Skipping create...Using existing machine configuration
	I0429 19:10:12.051384   37131 fix.go:54] fixHost starting: 
	I0429 19:10:12.051633   37131 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:10:12.051663   37131 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:10:12.066341   37131 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I0429 19:10:12.066799   37131 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:10:12.067280   37131 main.go:141] libmachine: Using API Version  1
	I0429 19:10:12.067304   37131 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:10:12.067723   37131 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:10:12.068017   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:10:12.068255   37131 main.go:141] libmachine: (ha-058855) Calling .GetState
	I0429 19:10:12.070340   37131 fix.go:112] recreateIfNeeded on ha-058855: state=Running err=<nil>
	W0429 19:10:12.070376   37131 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 19:10:12.072347   37131 out.go:177] * Updating the running kvm2 "ha-058855" VM ...
	I0429 19:10:12.073574   37131 machine.go:94] provisionDockerMachine start ...
	I0429 19:10:12.073597   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:10:12.073839   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:10:12.076533   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.076984   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.077026   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.077153   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:10:12.077324   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.077485   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.077603   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:10:12.077808   37131 main.go:141] libmachine: Using SSH client type: native
	I0429 19:10:12.078136   37131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 19:10:12.078155   37131 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 19:10:12.196094   37131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-058855
	
	I0429 19:10:12.196124   37131 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 19:10:12.196362   37131 buildroot.go:166] provisioning hostname "ha-058855"
	I0429 19:10:12.196383   37131 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 19:10:12.196579   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:10:12.199004   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.199382   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.199406   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.199587   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:10:12.199770   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.199933   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.200069   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:10:12.200655   37131 main.go:141] libmachine: Using SSH client type: native
	I0429 19:10:12.200962   37131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 19:10:12.201016   37131 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-058855 && echo "ha-058855" | sudo tee /etc/hostname
	I0429 19:10:12.338170   37131 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-058855
	
	I0429 19:10:12.338200   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:10:12.341036   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.341529   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.341554   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.341766   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:10:12.341962   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.342192   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.342366   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:10:12.342523   37131 main.go:141] libmachine: Using SSH client type: native
	I0429 19:10:12.342679   37131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 19:10:12.342695   37131 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-058855' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-058855/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-058855' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:10:12.455859   37131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:10:12.455894   37131 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:10:12.455944   37131 buildroot.go:174] setting up certificates
	I0429 19:10:12.455962   37131 provision.go:84] configureAuth start
	I0429 19:10:12.455980   37131 main.go:141] libmachine: (ha-058855) Calling .GetMachineName
	I0429 19:10:12.456321   37131 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:10:12.459120   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.459569   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.459599   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.459761   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:10:12.462211   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.462531   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.462583   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.462717   37131 provision.go:143] copyHostCerts
	I0429 19:10:12.462743   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:10:12.462776   37131 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:10:12.462785   37131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:10:12.462846   37131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:10:12.462931   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:10:12.462948   37131 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:10:12.462954   37131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:10:12.462977   37131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:10:12.463030   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:10:12.463045   37131 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:10:12.463049   37131 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:10:12.463069   37131 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:10:12.463158   37131 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.ha-058855 san=[127.0.0.1 192.168.39.52 ha-058855 localhost minikube]
	I0429 19:10:12.575702   37131 provision.go:177] copyRemoteCerts
	I0429 19:10:12.575761   37131 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:10:12.575783   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:10:12.578598   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.578963   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.578992   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.579190   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:10:12.579379   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.579512   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:10:12.579665   37131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:10:12.671290   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 19:10:12.671353   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:10:12.703502   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 19:10:12.703571   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 19:10:12.733519   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 19:10:12.733590   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 19:10:12.762793   37131 provision.go:87] duration metric: took 306.815027ms to configureAuth
	I0429 19:10:12.762824   37131 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:10:12.763079   37131 config.go:182] Loaded profile config "ha-058855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:10:12.763161   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:10:12.766137   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.766553   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:10:12.766574   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:10:12.766844   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:10:12.767029   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.767189   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:10:12.767405   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:10:12.767561   37131 main.go:141] libmachine: Using SSH client type: native
	I0429 19:10:12.767751   37131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 19:10:12.767781   37131 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:11:43.782036   37131 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 19:11:43.782095   37131 machine.go:97] duration metric: took 1m31.708503981s to provisionDockerMachine
	I0429 19:11:43.782110   37131 start.go:293] postStartSetup for "ha-058855" (driver="kvm2")
	I0429 19:11:43.782123   37131 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:11:43.782149   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:11:43.782521   37131 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:11:43.782551   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:11:43.785555   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:43.786050   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:43.786099   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:43.786251   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:11:43.786480   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:11:43.786655   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:11:43.786815   37131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:11:43.875377   37131 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:11:43.880400   37131 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:11:43.880430   37131 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:11:43.880510   37131 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:11:43.880596   37131 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:11:43.880611   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /etc/ssl/certs/151242.pem
	I0429 19:11:43.880692   37131 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:11:43.892419   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:11:43.919920   37131 start.go:296] duration metric: took 137.794993ms for postStartSetup
	I0429 19:11:43.919974   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:11:43.920302   37131 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0429 19:11:43.920327   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:11:43.922870   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:43.923308   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:43.923334   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:43.923470   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:11:43.923659   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:11:43.923794   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:11:43.923910   37131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	W0429 19:11:44.010726   37131 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0429 19:11:44.010757   37131 fix.go:56] duration metric: took 1m31.959371993s for fixHost
	I0429 19:11:44.010779   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:11:44.013493   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.013802   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:44.013825   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.014016   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:11:44.014232   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:11:44.014401   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:11:44.014520   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:11:44.014659   37131 main.go:141] libmachine: Using SSH client type: native
	I0429 19:11:44.014838   37131 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.52 22 <nil> <nil>}
	I0429 19:11:44.014851   37131 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 19:11:44.127408   37131 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714417904.090718173
	
	I0429 19:11:44.127431   37131 fix.go:216] guest clock: 1714417904.090718173
	I0429 19:11:44.127439   37131 fix.go:229] Guest: 2024-04-29 19:11:44.090718173 +0000 UTC Remote: 2024-04-29 19:11:44.010765189 +0000 UTC m=+92.104756440 (delta=79.952984ms)
	I0429 19:11:44.127489   37131 fix.go:200] guest clock delta is within tolerance: 79.952984ms
	I0429 19:11:44.127495   37131 start.go:83] releasing machines lock for "ha-058855", held for 1m32.076131381s
	I0429 19:11:44.127512   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:11:44.127783   37131 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:11:44.130490   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.130842   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:44.130869   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.130981   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:11:44.131519   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:11:44.131693   37131 main.go:141] libmachine: (ha-058855) Calling .DriverName
	I0429 19:11:44.131751   37131 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:11:44.131793   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:11:44.131886   37131 ssh_runner.go:195] Run: cat /version.json
	I0429 19:11:44.131910   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHHostname
	I0429 19:11:44.134359   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.134658   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.134761   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:44.134813   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.134898   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:11:44.135078   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:11:44.135113   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:44.135136   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:44.135239   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:11:44.135294   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHPort
	I0429 19:11:44.135391   37131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:11:44.135452   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHKeyPath
	I0429 19:11:44.135575   37131 main.go:141] libmachine: (ha-058855) Calling .GetSSHUsername
	I0429 19:11:44.135728   37131 sshutil.go:53] new ssh client: &{IP:192.168.39.52 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/ha-058855/id_rsa Username:docker}
	I0429 19:11:44.215552   37131 ssh_runner.go:195] Run: systemctl --version
	I0429 19:11:44.248100   37131 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:11:44.413843   37131 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:11:44.423032   37131 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:11:44.423107   37131 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:11:44.433461   37131 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 19:11:44.433490   37131 start.go:494] detecting cgroup driver to use...
	I0429 19:11:44.433545   37131 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:11:44.451549   37131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:11:44.468006   37131 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:11:44.468072   37131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:11:44.482338   37131 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:11:44.496879   37131 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:11:44.647500   37131 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:11:44.817114   37131 docker.go:233] disabling docker service ...
	I0429 19:11:44.817199   37131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:11:44.840275   37131 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:11:44.857077   37131 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:11:45.017083   37131 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:11:45.173348   37131 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 19:11:45.190692   37131 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:11:45.212518   37131 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 19:11:45.212578   37131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.224857   37131 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:11:45.224932   37131 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.237597   37131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.250437   37131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.263192   37131 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:11:45.276393   37131 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.290240   37131 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.302922   37131 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:11:45.316757   37131 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:11:45.328755   37131 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:11:45.339973   37131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:11:45.495024   37131 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 19:11:50.910739   37131 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.415677431s)
	I0429 19:11:50.910773   37131 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:11:50.910828   37131 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:11:50.916703   37131 start.go:562] Will wait 60s for crictl version
	I0429 19:11:50.916757   37131 ssh_runner.go:195] Run: which crictl
	I0429 19:11:50.921257   37131 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:11:50.974084   37131 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 19:11:50.974153   37131 ssh_runner.go:195] Run: crio --version
	I0429 19:11:51.012909   37131 ssh_runner.go:195] Run: crio --version
	I0429 19:11:51.052247   37131 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 19:11:51.053873   37131 main.go:141] libmachine: (ha-058855) Calling .GetIP
	I0429 19:11:51.056690   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:51.057028   37131 main.go:141] libmachine: (ha-058855) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:0c:a5", ip: ""} in network mk-ha-058855: {Iface:virbr1 ExpiryTime:2024-04-29 19:59:01 +0000 UTC Type:0 Mac:52:54:00:bf:0c:a5 Iaid: IPaddr:192.168.39.52 Prefix:24 Hostname:ha-058855 Clientid:01:52:54:00:bf:0c:a5}
	I0429 19:11:51.057050   37131 main.go:141] libmachine: (ha-058855) DBG | domain ha-058855 has defined IP address 192.168.39.52 and MAC address 52:54:00:bf:0c:a5 in network mk-ha-058855
	I0429 19:11:51.057243   37131 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 19:11:51.062814   37131 kubeadm.go:877] updating cluster {Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 19:11:51.062948   37131 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:11:51.063003   37131 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:11:51.116925   37131 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 19:11:51.116948   37131 crio.go:433] Images already preloaded, skipping extraction
	I0429 19:11:51.117001   37131 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:11:51.159723   37131 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 19:11:51.159746   37131 cache_images.go:84] Images are preloaded, skipping loading
	I0429 19:11:51.159755   37131 kubeadm.go:928] updating node { 192.168.39.52 8443 v1.30.0 crio true true} ...
	I0429 19:11:51.159855   37131 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-058855 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.52
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:11:51.159920   37131 ssh_runner.go:195] Run: crio config
	I0429 19:11:51.222237   37131 cni.go:84] Creating CNI manager for ""
	I0429 19:11:51.222258   37131 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0429 19:11:51.222268   37131 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 19:11:51.222288   37131 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.52 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-058855 NodeName:ha-058855 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.52"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.52 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 19:11:51.222422   37131 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.52
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-058855"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.52
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.52"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 19:11:51.222441   37131 kube-vip.go:115] generating kube-vip config ...
	I0429 19:11:51.222480   37131 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 19:11:51.235971   37131 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 19:11:51.236098   37131 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0429 19:11:51.236153   37131 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:11:51.247404   37131 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 19:11:51.247498   37131 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 19:11:51.259260   37131 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 19:11:51.279278   37131 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:11:51.296770   37131 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0429 19:11:51.315766   37131 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0429 19:11:51.335464   37131 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0429 19:11:51.341180   37131 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:11:51.505378   37131 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:11:51.523230   37131 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855 for IP: 192.168.39.52
	I0429 19:11:51.523251   37131 certs.go:194] generating shared ca certs ...
	I0429 19:11:51.523265   37131 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:11:51.523431   37131 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 19:11:51.523498   37131 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 19:11:51.523512   37131 certs.go:256] generating profile certs ...
	I0429 19:11:51.523600   37131 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/client.key
	I0429 19:11:51.523637   37131 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.b5b24e72
	I0429 19:11:51.523658   37131 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.b5b24e72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.52 192.168.39.27 192.168.39.215 192.168.39.254]
	I0429 19:11:52.043059   37131 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.b5b24e72 ...
	I0429 19:11:52.043088   37131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.b5b24e72: {Name:mk2d26705800526e7e28daf478b103ebbe86ff77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:11:52.043250   37131 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.b5b24e72 ...
	I0429 19:11:52.043279   37131 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.b5b24e72: {Name:mk9e91764a777ba5e6b2e2f3d743a8444b123491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:11:52.043355   37131 certs.go:381] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt.b5b24e72 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt
	I0429 19:11:52.043507   37131 certs.go:385] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key.b5b24e72 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key
	I0429 19:11:52.043637   37131 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key
	I0429 19:11:52.043652   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:11:52.043664   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:11:52.043677   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:11:52.043689   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:11:52.043701   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:11:52.043713   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:11:52.043730   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:11:52.043751   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:11:52.043802   37131 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 19:11:52.043829   37131 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 19:11:52.043839   37131 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 19:11:52.043860   37131 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 19:11:52.043886   37131 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 19:11:52.043906   37131 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 19:11:52.043940   37131 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:11:52.043965   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem -> /usr/share/ca-certificates/15124.pem
	I0429 19:11:52.043978   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /usr/share/ca-certificates/151242.pem
	I0429 19:11:52.043990   37131 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:11:52.044571   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:11:52.076136   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 19:11:52.104116   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:11:52.131069   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:11:52.159412   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0429 19:11:52.188172   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 19:11:52.214965   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:11:52.242255   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/ha-058855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 19:11:52.269831   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 19:11:52.296844   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 19:11:52.324587   37131 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:11:52.352226   37131 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 19:11:52.371652   37131 ssh_runner.go:195] Run: openssl version
	I0429 19:11:52.378694   37131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 19:11:52.391448   37131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 19:11:52.397067   37131 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:11:52.397115   37131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 19:11:52.403690   37131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 19:11:52.414917   37131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 19:11:52.427327   37131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 19:11:52.432800   37131 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:11:52.432858   37131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 19:11:52.439978   37131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:11:52.450188   37131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:11:52.461760   37131 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:11:52.467142   37131 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:11:52.467212   37131 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:11:52.473502   37131 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:11:52.484335   37131 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:11:52.489814   37131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 19:11:52.496946   37131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 19:11:52.503377   37131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 19:11:52.509977   37131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 19:11:52.516805   37131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 19:11:52.523219   37131 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 19:11:52.529815   37131 kubeadm.go:391] StartCluster: {Name:ha-058855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-058855 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.52 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:11:52.529971   37131 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 19:11:52.530028   37131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 19:11:52.571928   37131 cri.go:89] found id: "456e3c46605c79ee57cb93c23059d5f03ffaa307a1bde9a358e8dbf26733090b"
	I0429 19:11:52.571955   37131 cri.go:89] found id: "dc0361e8b66dd1248ecd1214f6b9fa96a060ba135ef3bd13e16b7119c7a30299"
	I0429 19:11:52.571961   37131 cri.go:89] found id: "f89d1200b589323095b891ded44d0f39b5d9d304183f973762186b00994f3cbf"
	I0429 19:11:52.571966   37131 cri.go:89] found id: "09573684ce4866f26fe6dc7ca6f3016d7610603eb5aed63c3c620c2f9a2e95d6"
	I0429 19:11:52.571970   37131 cri.go:89] found id: "c7318d57848f144b2bb27a1ee912ec5726a3996ab5d9a75712fcd8120d1c41df"
	I0429 19:11:52.571974   37131 cri.go:89] found id: "6d85e15a41334e0f49396a7c8783334a7d5e05b649146b665d0437111bf89ade"
	I0429 19:11:52.571978   37131 cri.go:89] found id: "35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b"
	I0429 19:11:52.571982   37131 cri.go:89] found id: "db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe"
	I0429 19:11:52.571986   37131 cri.go:89] found id: "2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5"
	I0429 19:11:52.571993   37131 cri.go:89] found id: "45ced81842ab99aabac98f2ac5d6e1b110a73465d11e56c87d6166d153839862"
	I0429 19:11:52.571997   37131 cri.go:89] found id: "3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad"
	I0429 19:11:52.572001   37131 cri.go:89] found id: "d9513857b60ae4b75efae6de6be9d83d589f9d511ba539d01bc7e371a1a0e090"
	I0429 19:11:52.572008   37131 cri.go:89] found id: "d9139aba22c80eaaf47d55790db8284fc4c3d959ba23904a36880d4d936f4622"
	I0429 19:11:52.572013   37131 cri.go:89] found id: "f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067"
	I0429 19:11:52.572020   37131 cri.go:89] found id: ""
	I0429 19:11:52.572068   37131 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.037552698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714418256037523338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fadca899-c5f5-4863-8e9f-ce4a0fed32ac name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.038449324Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3b6e369-c78e-4cea-8ecc-a299dad74a55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.038512488Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3b6e369-c78e-4cea-8ecc-a299dad74a55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.039195760Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e191e297281741021e5309da12023e898fb42af47a910b5296fca453cf3a59a9,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714418014575610666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d56e42bb62f0802b29ab5431bfe35a9c4ed282bef23cd07745fd552f016a0c2,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714417998584511876,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dcb7268514a41d84040496fb3f97dd604c39d860db3795b1f536f6388d6c11,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417960583418252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b59ec3dc1e29a4c89fb2d40bf1cb3db18358c929912c01f77801025c117736f,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417958577543599,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49055a12b83d77f6453880eea876f9f8827a406c542e2fae249a50e1417f0583,PodSandboxId:c5f248cdad0a4e0c612e6124cf1ec86f5f5e7e51c8195186b1dae72669e820eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417950948693189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cc8a93682bcdac3c74aabfaf7ac1a16386d5e52b357267a4354a32e4789709,PodSandboxId:19446d08654e14ba0fc1823d9b4dad71e2457cd842f2b4237041e278acb314a5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417928149728501,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0101d9bfd28f4f64a2207189ca2952df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:2ca11a172d18b7da9d7ad94a0a9eae78db44bfaec6ec0ce8cc6be0a5c4d6e791,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714417919017837307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3234c6a2a02115d1a2b3c8db09477d14fa780e263e04d16a673863bdef318b03,PodSandboxId:1981e51a60fc9bfd1a839f81ae9faf09c9556e372755305615281483a1187fc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417917587343991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa254f41
7bd8c51401396df387d06fb731904675af71223321fec1e881d2e3bc,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714417917767697912,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b73fc09f93dd22fd87a22dc40dbad619e67ea8a27
b8e20dcf601f5e0f7ddcb,PodSandboxId:48b8b3bb4968f7483eebf06032b1a8accab07811f969d5231f87a2ccf2c7127f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917914910181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080f231760b7719587b43a8121d8b9e314e646c9be91cd1843e6879b061326ac,PodSandboxId:54d8909c7a920e28849cf9c10442ef50f0faf48e265fd2fa2c1fa044f97f7e93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917809121425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02cf56519f638778caaaa8342593494ae6cecd78d3a8f6122ae98be89f810dae,PodSandboxId:720fc0053e31cfbb6f1170c0811bbea3d7a92267a445f2f9096e17724c461b24,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417917657039067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53824
70eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3bfc6bba83dd30bc001418918d12a37f07affec561132fc8a6bfd32f7fca8c,PodSandboxId:6ff12ce46f5f84dfc87db5bb207fbd9e412ab6d9f83e04aec492de99a510cd30,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417917436371922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f21f1cfa42f5dc7250d4b936ccac831fb3c1028e1832fef69bf664596a8c441,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714417917519326975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes
.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3212de69ac372cf90c1735c062daa36d336d730750901cd5fb573b42df375e,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714417917398524057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kuber
netes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714417414458938341,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kuberne
tes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187512933738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187459716216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714417184691606405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714417163290641629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714417163188867021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3b6e369-c78e-4cea-8ecc-a299dad74a55 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.086115716Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c8efe4e-2c25-4a37-bd75-93c5084c9493 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.086196893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c8efe4e-2c25-4a37-bd75-93c5084c9493 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.087393404Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=577ff469-0194-4748-b650-4e8d119b4944 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.088008536Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714418256087974339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=577ff469-0194-4748-b650-4e8d119b4944 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.088544469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25df2a4d-fe81-44f9-b15c-debc4004bdfa name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.088601963Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25df2a4d-fe81-44f9-b15c-debc4004bdfa name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.089197589Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e191e297281741021e5309da12023e898fb42af47a910b5296fca453cf3a59a9,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714418014575610666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d56e42bb62f0802b29ab5431bfe35a9c4ed282bef23cd07745fd552f016a0c2,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714417998584511876,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dcb7268514a41d84040496fb3f97dd604c39d860db3795b1f536f6388d6c11,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417960583418252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b59ec3dc1e29a4c89fb2d40bf1cb3db18358c929912c01f77801025c117736f,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417958577543599,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49055a12b83d77f6453880eea876f9f8827a406c542e2fae249a50e1417f0583,PodSandboxId:c5f248cdad0a4e0c612e6124cf1ec86f5f5e7e51c8195186b1dae72669e820eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417950948693189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cc8a93682bcdac3c74aabfaf7ac1a16386d5e52b357267a4354a32e4789709,PodSandboxId:19446d08654e14ba0fc1823d9b4dad71e2457cd842f2b4237041e278acb314a5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417928149728501,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0101d9bfd28f4f64a2207189ca2952df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:2ca11a172d18b7da9d7ad94a0a9eae78db44bfaec6ec0ce8cc6be0a5c4d6e791,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714417919017837307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3234c6a2a02115d1a2b3c8db09477d14fa780e263e04d16a673863bdef318b03,PodSandboxId:1981e51a60fc9bfd1a839f81ae9faf09c9556e372755305615281483a1187fc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417917587343991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa254f41
7bd8c51401396df387d06fb731904675af71223321fec1e881d2e3bc,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714417917767697912,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b73fc09f93dd22fd87a22dc40dbad619e67ea8a27
b8e20dcf601f5e0f7ddcb,PodSandboxId:48b8b3bb4968f7483eebf06032b1a8accab07811f969d5231f87a2ccf2c7127f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917914910181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080f231760b7719587b43a8121d8b9e314e646c9be91cd1843e6879b061326ac,PodSandboxId:54d8909c7a920e28849cf9c10442ef50f0faf48e265fd2fa2c1fa044f97f7e93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917809121425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02cf56519f638778caaaa8342593494ae6cecd78d3a8f6122ae98be89f810dae,PodSandboxId:720fc0053e31cfbb6f1170c0811bbea3d7a92267a445f2f9096e17724c461b24,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417917657039067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53824
70eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3bfc6bba83dd30bc001418918d12a37f07affec561132fc8a6bfd32f7fca8c,PodSandboxId:6ff12ce46f5f84dfc87db5bb207fbd9e412ab6d9f83e04aec492de99a510cd30,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417917436371922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f21f1cfa42f5dc7250d4b936ccac831fb3c1028e1832fef69bf664596a8c441,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714417917519326975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes
.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3212de69ac372cf90c1735c062daa36d336d730750901cd5fb573b42df375e,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714417917398524057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kuber
netes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714417414458938341,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kuberne
tes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187512933738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187459716216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714417184691606405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714417163290641629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714417163188867021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25df2a4d-fe81-44f9-b15c-debc4004bdfa name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.157551382Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8eaaf126-ede9-4aac-ba95-87f7167a18b3 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.157687921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8eaaf126-ede9-4aac-ba95-87f7167a18b3 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.159099267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=589b8e6e-8c26-461b-a469-00ccf4639e5f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.159524768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714418256159502840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=589b8e6e-8c26-461b-a469-00ccf4639e5f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.160161677Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f98d5399-67ef-4712-ae81-3703f1b30370 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.160219886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f98d5399-67ef-4712-ae81-3703f1b30370 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.160668897Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e191e297281741021e5309da12023e898fb42af47a910b5296fca453cf3a59a9,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714418014575610666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d56e42bb62f0802b29ab5431bfe35a9c4ed282bef23cd07745fd552f016a0c2,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714417998584511876,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dcb7268514a41d84040496fb3f97dd604c39d860db3795b1f536f6388d6c11,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417960583418252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b59ec3dc1e29a4c89fb2d40bf1cb3db18358c929912c01f77801025c117736f,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417958577543599,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49055a12b83d77f6453880eea876f9f8827a406c542e2fae249a50e1417f0583,PodSandboxId:c5f248cdad0a4e0c612e6124cf1ec86f5f5e7e51c8195186b1dae72669e820eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417950948693189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cc8a93682bcdac3c74aabfaf7ac1a16386d5e52b357267a4354a32e4789709,PodSandboxId:19446d08654e14ba0fc1823d9b4dad71e2457cd842f2b4237041e278acb314a5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417928149728501,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0101d9bfd28f4f64a2207189ca2952df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:2ca11a172d18b7da9d7ad94a0a9eae78db44bfaec6ec0ce8cc6be0a5c4d6e791,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714417919017837307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3234c6a2a02115d1a2b3c8db09477d14fa780e263e04d16a673863bdef318b03,PodSandboxId:1981e51a60fc9bfd1a839f81ae9faf09c9556e372755305615281483a1187fc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417917587343991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa254f41
7bd8c51401396df387d06fb731904675af71223321fec1e881d2e3bc,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714417917767697912,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b73fc09f93dd22fd87a22dc40dbad619e67ea8a27
b8e20dcf601f5e0f7ddcb,PodSandboxId:48b8b3bb4968f7483eebf06032b1a8accab07811f969d5231f87a2ccf2c7127f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917914910181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080f231760b7719587b43a8121d8b9e314e646c9be91cd1843e6879b061326ac,PodSandboxId:54d8909c7a920e28849cf9c10442ef50f0faf48e265fd2fa2c1fa044f97f7e93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917809121425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02cf56519f638778caaaa8342593494ae6cecd78d3a8f6122ae98be89f810dae,PodSandboxId:720fc0053e31cfbb6f1170c0811bbea3d7a92267a445f2f9096e17724c461b24,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417917657039067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53824
70eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3bfc6bba83dd30bc001418918d12a37f07affec561132fc8a6bfd32f7fca8c,PodSandboxId:6ff12ce46f5f84dfc87db5bb207fbd9e412ab6d9f83e04aec492de99a510cd30,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417917436371922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f21f1cfa42f5dc7250d4b936ccac831fb3c1028e1832fef69bf664596a8c441,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714417917519326975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes
.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3212de69ac372cf90c1735c062daa36d336d730750901cd5fb573b42df375e,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714417917398524057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kuber
netes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714417414458938341,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kuberne
tes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187512933738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187459716216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714417184691606405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714417163290641629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714417163188867021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f98d5399-67ef-4712-ae81-3703f1b30370 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.206725674Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d7e9aa08-acd2-4daa-b806-1d4865250e35 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.206952597Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d7e9aa08-acd2-4daa-b806-1d4865250e35 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.208586275Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a563f1e9-4674-472d-ac95-007a431b0ec0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.209094862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714418256209066946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a563f1e9-4674-472d-ac95-007a431b0ec0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.210274762Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9da9e217-a290-464c-bc54-ebc2f91a972b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.210337080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9da9e217-a290-464c-bc54-ebc2f91a972b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:17:36 ha-058855 crio[4018]: time="2024-04-29 19:17:36.210732811Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e191e297281741021e5309da12023e898fb42af47a910b5296fca453cf3a59a9,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714418014575610666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d56e42bb62f0802b29ab5431bfe35a9c4ed282bef23cd07745fd552f016a0c2,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714417998584511876,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31dcb7268514a41d84040496fb3f97dd604c39d860db3795b1f536f6388d6c11,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714417960583418252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b59ec3dc1e29a4c89fb2d40bf1cb3db18358c929912c01f77801025c117736f,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714417958577543599,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49055a12b83d77f6453880eea876f9f8827a406c542e2fae249a50e1417f0583,PodSandboxId:c5f248cdad0a4e0c612e6124cf1ec86f5f5e7e51c8195186b1dae72669e820eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714417950948693189,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kubernetes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68cc8a93682bcdac3c74aabfaf7ac1a16386d5e52b357267a4354a32e4789709,PodSandboxId:19446d08654e14ba0fc1823d9b4dad71e2457cd842f2b4237041e278acb314a5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1714417928149728501,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0101d9bfd28f4f64a2207189ca2952df,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:2ca11a172d18b7da9d7ad94a0a9eae78db44bfaec6ec0ce8cc6be0a5c4d6e791,PodSandboxId:ac8d70341e488c3dc6fb79eb786a28853f0e954c415117ddf6aaa174af011df7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714417919017837307,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1572f7da-1bda-4b9e-a5fc-315aae3ba592,},Annotations:map[string]string{io.kubernetes.container.hash: e7a60423,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:3234c6a2a02115d1a2b3c8db09477d14fa780e263e04d16a673863bdef318b03,PodSandboxId:1981e51a60fc9bfd1a839f81ae9faf09c9556e372755305615281483a1187fc7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714417917587343991,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa254f41
7bd8c51401396df387d06fb731904675af71223321fec1e881d2e3bc,PodSandboxId:fbe987603e4ff0ce442afdabd78afaafad0e1afd468a4c28cc63d29edd3b0334,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714417917767697912,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j42cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13d10343-b59f-490f-ac7c-973271cc27d2,},Annotations:map[string]string{io.kubernetes.container.hash: 66f7810a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86b73fc09f93dd22fd87a22dc40dbad619e67ea8a27
b8e20dcf601f5e0f7ddcb,PodSandboxId:48b8b3bb4968f7483eebf06032b1a8accab07811f969d5231f87a2ccf2c7127f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917914910181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080f231760b7719587b43a8121d8b9e314e646c9be91cd1843e6879b061326ac,PodSandboxId:54d8909c7a920e28849cf9c10442ef50f0faf48e265fd2fa2c1fa044f97f7e93,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714417917809121425,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02cf56519f638778caaaa8342593494ae6cecd78d3a8f6122ae98be89f810dae,PodSandboxId:720fc0053e31cfbb6f1170c0811bbea3d7a92267a445f2f9096e17724c461b24,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714417917657039067,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53824
70eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae3bfc6bba83dd30bc001418918d12a37f07affec561132fc8a6bfd32f7fca8c,PodSandboxId:6ff12ce46f5f84dfc87db5bb207fbd9e412ab6d9f83e04aec492de99a510cd30,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714417917436371922,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[strin
g]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f21f1cfa42f5dc7250d4b936ccac831fb3c1028e1832fef69bf664596a8c441,PodSandboxId:4c1f41849f6cc32d06159c9e5724d6f96b1b2eb73d0948b48f17cc00a8942ca4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714417917519326975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af5ae94dd6fa640c6a87e1b677ca6ae6,},Annotations:map[string]string{io.kubernetes
.container.hash: 23ced2a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d3212de69ac372cf90c1735c062daa36d336d730750901cd5fb573b42df375e,PodSandboxId:e82216028935bcebe836b8d2c3c7fe3ba787966bd1f006f32db2a5998b7d07b9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714417917398524057,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d92703a0d641b881a7039575606286,},Annotations:map[string]string{io.kuber
netes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ebcb4aac0715c790071e01d8a0ab4c046bbabd0dcf6575d7359812f4f1b74b8,PodSandboxId:5d6b9a26ffca45bdcb5b201275498d7a7efa4e0ec59e8d6c751c6d37ca70dc19,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714417414458938341,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-nst7c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e810c83c-cdd7-4072-b8e8-319fd5aa4daa,},Annotations:map[string]string{io.kuberne
tes.container.hash: 84dbc699,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b,PodSandboxId:27fc4fec5e3f0677051bec1031fa1643b62c7855e175500fdf7909f4773e4475,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187512933738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-njch8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823d223d-f7bd-4b9c-bdd9-8d0ae063d449,},Annotations:map[string]string{io.kubernetes.container.hash: c81cd755,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe,PodSandboxId:1050f7bafa98e43fafa6ca370c7d5b4671f150c2dbd9685dcc82049951670a2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714417187459716216,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-bbq9x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a016fbf8-4a91-4f2f-97da-44b6e2195885,},Annotations:map[string]string{io.kubernetes.container.hash: 73fcb892,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5,PodSandboxId:fe7fa96de2987f048de05261597baa551deaea62f6048ef61f5da9b8fb6322d9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf4
31fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714417184691606405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xldlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a01564cb-ea76-4cc5-abad-d2d70b79bf6d,},Annotations:map[string]string{io.kubernetes.container.hash: b56fdbb0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad,PodSandboxId:eaa9cff42f55b50dc050182b56a3a066099371cefd0e58ab89dea9abac494857,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929d
c8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714417163290641629,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5382470eaba9fa40c319c5aaf393ee38,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067,PodSandboxId:40b3f5ad731ff2887930a2bd8a804c02d5877813b8e208a705b0781b92ca7c8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1714417163188867021,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-058855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd8cbd0a146b4ae041fb7271005e1408,},Annotations:map[string]string{io.kubernetes.container.hash: ab038a36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9da9e217-a290-464c-bc54-ebc2f91a972b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e191e29728174       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       5                   ac8d70341e488       storage-provisioner
	7d56e42bb62f0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               4                   fbe987603e4ff       kindnet-j42cd
	31dcb7268514a       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      4 minutes ago       Running             kube-controller-manager   2                   e82216028935b       kube-controller-manager-ha-058855
	3b59ec3dc1e29       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      4 minutes ago       Running             kube-apiserver            3                   4c1f41849f6cc       kube-apiserver-ha-058855
	49055a12b83d7       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   c5f248cdad0a4       busybox-fc5497c4f-nst7c
	68cc8a93682bc       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   19446d08654e1       kube-vip-ha-058855
	2ca11a172d18b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       4                   ac8d70341e488       storage-provisioner
	86b73fc09f93d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   48b8b3bb4968f       coredns-7db6d8ff4d-bbq9x
	080f231760b77       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   54d8909c7a920       coredns-7db6d8ff4d-njch8
	aa254f417bd8c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               3                   fbe987603e4ff       kindnet-j42cd
	02cf56519f638       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      5 minutes ago       Running             kube-scheduler            1                   720fc0053e31c       kube-scheduler-ha-058855
	3234c6a2a0211       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      5 minutes ago       Running             kube-proxy                1                   1981e51a60fc9       kube-proxy-xldlc
	8f21f1cfa42f5       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      5 minutes ago       Exited              kube-apiserver            2                   4c1f41849f6cc       kube-apiserver-ha-058855
	ae3bfc6bba83d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   6ff12ce46f5f8       etcd-ha-058855
	0d3212de69ac3       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      5 minutes ago       Exited              kube-controller-manager   1                   e82216028935b       kube-controller-manager-ha-058855
	3ebcb4aac0715       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago      Exited              busybox                   0                   5d6b9a26ffca4       busybox-fc5497c4f-nst7c
	35b38d136f10c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   27fc4fec5e3f0       coredns-7db6d8ff4d-njch8
	db099f7f56f78       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   1050f7bafa98e       coredns-7db6d8ff4d-bbq9x
	2e3b2e1683b77       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      17 minutes ago      Exited              kube-proxy                0                   fe7fa96de2987       kube-proxy-xldlc
	3c1cf7e86cc05       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      18 minutes ago      Exited              kube-scheduler            0                   eaa9cff42f55b       kube-scheduler-ha-058855
	f653b7a6c4efb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      18 minutes ago      Exited              etcd                      0                   40b3f5ad731ff       etcd-ha-058855
	
	
	==> coredns [080f231760b7719587b43a8121d8b9e314e646c9be91cd1843e6879b061326ac] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40728->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40728->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40712->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:40712->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40706->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40706->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [35b38d136f10c7b5d07cfbf1af9446bf3f94e5a9c75b0bcc62697e1974ff6a5b] <==
	[INFO] 10.244.1.2:46625 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114006s
	[INFO] 10.244.1.2:57265 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118743s
	[INFO] 10.244.1.2:34075 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000376654s
	[INFO] 10.244.1.2:37316 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000287017s
	[INFO] 10.244.2.2:55857 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000148708s
	[INFO] 10.244.2.2:34046 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114435s
	[INFO] 10.244.2.2:59123 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013463s
	[INFO] 10.244.0.4:52788 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139069s
	[INFO] 10.244.0.4:54898 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174069s
	[INFO] 10.244.0.4:50441 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004412s
	[INFO] 10.244.1.2:34029 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183007s
	[INFO] 10.244.1.2:34413 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011174s
	[INFO] 10.244.1.2:46424 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000144489s
	[INFO] 10.244.1.2:35983 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116269s
	[INFO] 10.244.2.2:36513 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000459857s
	[INFO] 10.244.0.4:40033 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000351605s
	[INFO] 10.244.0.4:45496 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128261s
	[INFO] 10.244.1.2:58777 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000204086s
	[INFO] 10.244.2.2:46697 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000227863s
	[INFO] 10.244.2.2:60992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138077s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [86b73fc09f93dd22fd87a22dc40dbad619e67ea8a27b8e20dcf601f5e0f7ddcb] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38004->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38004->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38026->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38026->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38012->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38012->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [db099f7f56f78ae9c18a014d6610a5e8753f5040e3e16640ff1ed7d3ab2346fe] <==
	[INFO] 10.244.0.4:38237 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178889s
	[INFO] 10.244.1.2:51028 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000274871s
	[INFO] 10.244.1.2:44471 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001330026s
	[INFO] 10.244.1.2:42432 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122996s
	[INFO] 10.244.2.2:59580 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000294012s
	[INFO] 10.244.2.2:60659 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00179161s
	[INFO] 10.244.2.2:39549 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000317743s
	[INFO] 10.244.2.2:43315 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001176961s
	[INFO] 10.244.2.2:32992 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190177s
	[INFO] 10.244.0.4:46409 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000047581s
	[INFO] 10.244.2.2:53037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141835s
	[INFO] 10.244.2.2:44640 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000203835s
	[INFO] 10.244.2.2:58171 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090591s
	[INFO] 10.244.0.4:44158 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106787s
	[INFO] 10.244.0.4:57643 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000199048s
	[INFO] 10.244.1.2:57285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127384s
	[INFO] 10.244.1.2:53223 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000223061s
	[INFO] 10.244.1.2:54113 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106292s
	[INFO] 10.244.2.2:57470 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012081s
	[INFO] 10.244.2.2:35174 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139962s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=19, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-058855
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T18_59_30_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 18:59:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:17:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:12:41 +0000   Mon, 29 Apr 2024 18:59:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:12:41 +0000   Mon, 29 Apr 2024 18:59:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:12:41 +0000   Mon, 29 Apr 2024 18:59:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:12:41 +0000   Mon, 29 Apr 2024 18:59:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.52
	  Hostname:    ha-058855
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4dd245ae2fbf4ffeb364af3ff6801808
	  System UUID:                4dd245ae-2fbf-4ffe-b364-af3ff6801808
	  Boot ID:                    41ab0acc-a7d3-4500-bada-adc41451a660
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nst7c              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7db6d8ff4d-bbq9x             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-njch8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-058855                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-j42cd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-058855             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-058855    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-xldlc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-058855             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-058855                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 17m                  kube-proxy       
	  Normal   Starting                 4m55s                kube-proxy       
	  Normal   NodeHasNoDiskPressure    18m                  kubelet          Node ha-058855 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 18m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  18m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  18m                  kubelet          Node ha-058855 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     18m                  kubelet          Node ha-058855 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                  node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Normal   NodeReady                17m                  kubelet          Node ha-058855 status is now: NodeReady
	  Normal   RegisteredNode           15m                  node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Warning  ContainerGCFailed        6m7s (x2 over 7m7s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m43s                node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Normal   RegisteredNode           4m42s                node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	  Normal   RegisteredNode           3m7s                 node-controller  Node ha-058855 event: Registered Node ha-058855 in Controller
	
	
	Name:               ha-058855-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_01_50_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:01:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:17:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:13:26 +0000   Mon, 29 Apr 2024 19:12:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:13:26 +0000   Mon, 29 Apr 2024 19:12:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:13:26 +0000   Mon, 29 Apr 2024 19:12:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:13:26 +0000   Mon, 29 Apr 2024 19:12:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-058855-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea727b7dfb674d998bb0a6c08dea140b
	  System UUID:                ea727b7d-fb67-4d99-8bb0-a6c08dea140b
	  Boot ID:                    8e31da5f-4ee6-43d7-b240-df0366f65859
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pr84n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-058855-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-xdtp4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-058855-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-058855-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-nz2rv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-058855-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-058855-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m45s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-058855-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-058855-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-058855-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-058855-m02 status is now: NodeNotReady
	  Normal  Starting                 5m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node ha-058855-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node ha-058855-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x7 over 5m19s)  kubelet          Node ha-058855-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m43s                  node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  RegisteredNode           4m42s                  node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	  Normal  RegisteredNode           3m7s                   node-controller  Node ha-058855-m02 event: Registered Node ha-058855-m02 in Controller
	
	
	Name:               ha-058855-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-058855-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-058855
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_04_09_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:04:08 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-058855-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:15:08 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 19:14:47 +0000   Mon, 29 Apr 2024 19:15:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 19:14:47 +0000   Mon, 29 Apr 2024 19:15:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 19:14:47 +0000   Mon, 29 Apr 2024 19:15:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 19:14:47 +0000   Mon, 29 Apr 2024 19:15:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    ha-058855-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbc9ec7037144061a802010c8eaa7400
	  System UUID:                fbc9ec70-3714-4061-a802-010c8eaa7400
	  Boot ID:                    5e7b908f-742d-4b2a-be01-e8237f91389e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s4p26    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-8mzbn              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-7qjvk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x3 over 13m)      kubelet          Node ha-058855-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x3 over 13m)      kubelet          Node ha-058855-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x3 over 13m)      kubelet          Node ha-058855-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-058855-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m43s                  node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal   RegisteredNode           4m42s                  node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal   RegisteredNode           3m7s                   node-controller  Node ha-058855-m04 event: Registered Node ha-058855-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m49s (x2 over 2m49s)  kubelet          Node ha-058855-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m49s (x2 over 2m49s)  kubelet          Node ha-058855-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x2 over 2m49s)  kubelet          Node ha-058855-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s                  kubelet          Node ha-058855-m04 has been rebooted, boot id: 5e7b908f-742d-4b2a-be01-e8237f91389e
	  Normal   NodeReady                2m49s                  kubelet          Node ha-058855-m04 status is now: NodeReady
	  Normal   NodeNotReady             108s (x2 over 4m3s)    node-controller  Node ha-058855-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.063053] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066472] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.176661] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.148881] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.312890] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.946074] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.072175] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.019108] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +1.004098] kauditd_printk_skb: 67 callbacks suppressed
	[  +6.172368] systemd-fstab-generator[1369]: Ignoring "noauto" option for root device
	[  +0.079206] kauditd_printk_skb: 30 callbacks suppressed
	[ +15.239291] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.268922] kauditd_printk_skb: 74 callbacks suppressed
	[Apr29 19:08] kauditd_printk_skb: 1 callbacks suppressed
	[Apr29 19:11] systemd-fstab-generator[3936]: Ignoring "noauto" option for root device
	[  +0.161031] systemd-fstab-generator[3948]: Ignoring "noauto" option for root device
	[  +0.208562] systemd-fstab-generator[3962]: Ignoring "noauto" option for root device
	[  +0.161312] systemd-fstab-generator[3974]: Ignoring "noauto" option for root device
	[  +0.320141] systemd-fstab-generator[4002]: Ignoring "noauto" option for root device
	[  +5.999485] systemd-fstab-generator[4104]: Ignoring "noauto" option for root device
	[  +0.096716] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.516553] kauditd_printk_skb: 12 callbacks suppressed
	[Apr29 19:12] kauditd_printk_skb: 87 callbacks suppressed
	[ +30.542820] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.806347] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [ae3bfc6bba83dd30bc001418918d12a37f07affec561132fc8a6bfd32f7fca8c] <==
	{"level":"info","ts":"2024-04-29T19:14:09.775212Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:14:09.778635Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:14:09.807532Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3baf479dc31b93a9","to":"51d96a7d7a2ba286","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-29T19:14:09.807619Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:14:09.811457Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3baf479dc31b93a9","to":"51d96a7d7a2ba286","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-29T19:14:09.811541Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"warn","ts":"2024-04-29T19:14:09.817578Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.215:44990","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-04-29T19:15:01.82843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3baf479dc31b93a9 switched to configuration voters=(4300734912070914985 7986791538629166505)"}
	{"level":"info","ts":"2024-04-29T19:15:01.832078Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"26c9414d925de00c","local-member-id":"3baf479dc31b93a9","removed-remote-peer-id":"51d96a7d7a2ba286","removed-remote-peer-urls":["https://192.168.39.215:2380"]}
	{"level":"info","ts":"2024-04-29T19:15:01.83228Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"warn","ts":"2024-04-29T19:15:01.832631Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:15:01.8327Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"warn","ts":"2024-04-29T19:15:01.833126Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:15:01.833191Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:15:01.833412Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"warn","ts":"2024-04-29T19:15:01.83357Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286","error":"context canceled"}
	{"level":"warn","ts":"2024-04-29T19:15:01.833653Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"51d96a7d7a2ba286","error":"failed to read 51d96a7d7a2ba286 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-29T19:15:01.833714Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"warn","ts":"2024-04-29T19:15:01.83398Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286","error":"context canceled"}
	{"level":"info","ts":"2024-04-29T19:15:01.834048Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:15:01.834092Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:15:01.834136Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"3baf479dc31b93a9","removed-remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:15:01.834392Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"3baf479dc31b93a9","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"51d96a7d7a2ba286"}
	{"level":"warn","ts":"2024-04-29T19:15:01.870051Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"3baf479dc31b93a9","remote-peer-id-stream-handler":"3baf479dc31b93a9","remote-peer-id-from":"51d96a7d7a2ba286"}
	{"level":"warn","ts":"2024-04-29T19:15:01.873612Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"3baf479dc31b93a9","remote-peer-id-stream-handler":"3baf479dc31b93a9","remote-peer-id-from":"51d96a7d7a2ba286"}
	
	
	==> etcd [f653b7a6c4efb7fbee66706b43803366daf75ca577cfa836a689118b96d4a067] <==
	2024/04/29 19:10:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-29T19:10:12.935884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.693648901s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-04-29T19:10:12.935899Z","caller":"traceutil/trace.go:171","msg":"trace[1300071160] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; }","duration":"7.693677573s","start":"2024-04-29T19:10:05.242217Z","end":"2024-04-29T19:10:12.935895Z","steps":["trace[1300071160] 'agreement among raft nodes before linearized reading'  (duration: 7.693657615s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T19:10:12.935918Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T19:10:05.242213Z","time spent":"7.693697226s","remote":"127.0.0.1:57172","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:10000 "}
	2024/04/29 19:10:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-29T19:10:12.970694Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.52:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T19:10:12.970816Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.52:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T19:10:12.970991Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3baf479dc31b93a9","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-29T19:10:12.971277Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971327Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971398Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971552Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971614Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971658Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3baf479dc31b93a9","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971692Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6ed6c896ab1645a9"}
	{"level":"info","ts":"2024-04-29T19:10:12.971701Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.97171Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.971732Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.971955Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.971989Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.972018Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3baf479dc31b93a9","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.972058Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"51d96a7d7a2ba286"}
	{"level":"info","ts":"2024-04-29T19:10:12.975995Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2024-04-29T19:10:12.976324Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.52:2380"}
	{"level":"info","ts":"2024-04-29T19:10:12.976408Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-058855","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.52:2380"],"advertise-client-urls":["https://192.168.39.52:2379"]}
	
	
	==> kernel <==
	 19:17:36 up 18 min,  0 users,  load average: 0.28, 0.32, 0.27
	Linux ha-058855 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7d56e42bb62f0802b29ab5431bfe35a9c4ed282bef23cd07745fd552f016a0c2] <==
	I0429 19:16:50.176507       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	I0429 19:17:00.193843       1 main.go:223] Handling node with IPs: map[192.168.39.52:{}]
	I0429 19:17:00.193930       1 main.go:227] handling current node
	I0429 19:17:00.193955       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 19:17:00.193973       1 main.go:250] Node ha-058855-m02 has CIDR [10.244.1.0/24] 
	I0429 19:17:00.194080       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I0429 19:17:00.194099       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	I0429 19:17:10.207089       1 main.go:223] Handling node with IPs: map[192.168.39.52:{}]
	I0429 19:17:10.207191       1 main.go:227] handling current node
	I0429 19:17:10.207219       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 19:17:10.207240       1 main.go:250] Node ha-058855-m02 has CIDR [10.244.1.0/24] 
	I0429 19:17:10.207376       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I0429 19:17:10.207407       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	I0429 19:17:20.338130       1 main.go:223] Handling node with IPs: map[192.168.39.52:{}]
	I0429 19:17:20.338254       1 main.go:227] handling current node
	I0429 19:17:20.338271       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 19:17:20.338278       1 main.go:250] Node ha-058855-m02 has CIDR [10.244.1.0/24] 
	I0429 19:17:20.338699       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I0429 19:17:20.338735       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	I0429 19:17:30.346852       1 main.go:223] Handling node with IPs: map[192.168.39.52:{}]
	I0429 19:17:30.346944       1 main.go:227] handling current node
	I0429 19:17:30.346968       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0429 19:17:30.346986       1 main.go:250] Node ha-058855-m02 has CIDR [10.244.1.0/24] 
	I0429 19:17:30.347113       1 main.go:223] Handling node with IPs: map[192.168.39.119:{}]
	I0429 19:17:30.347133       1 main.go:250] Node ha-058855-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [aa254f417bd8c51401396df387d06fb731904675af71223321fec1e881d2e3bc] <==
	I0429 19:11:58.370539       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0429 19:11:58.373829       1 main.go:107] hostIP = 192.168.39.52
	podIP = 192.168.39.52
	I0429 19:11:58.374108       1 main.go:116] setting mtu 1500 for CNI 
	I0429 19:11:58.440085       1 main.go:146] kindnetd IP family: "ipv4"
	I0429 19:11:58.440144       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0429 19:12:08.676933       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0429 19:12:18.680550       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0429 19:12:19.934321       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0429 19:12:23.006275       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0429 19:12:26.009267       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [3b59ec3dc1e29a4c89fb2d40bf1cb3db18358c929912c01f77801025c117736f] <==
	I0429 19:12:40.654500       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0429 19:12:40.654563       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0429 19:12:40.824592       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 19:12:40.825243       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 19:12:40.845910       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 19:12:40.845968       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 19:12:40.846073       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 19:12:40.846190       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 19:12:40.854608       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 19:12:40.854875       1 aggregator.go:165] initial CRD sync complete...
	I0429 19:12:40.854899       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 19:12:40.854909       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 19:12:40.854916       1 cache.go:39] Caches are synced for autoregister controller
	I0429 19:12:40.886194       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 19:12:40.886247       1 policy_source.go:224] refreshing policies
	I0429 19:12:40.887301       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 19:12:40.904619       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 19:12:40.913109       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0429 19:12:40.928265       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.215]
	I0429 19:12:40.929742       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 19:12:40.964468       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0429 19:12:40.973274       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0429 19:12:41.642926       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0429 19:12:42.399574       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.215 192.168.39.27 192.168.39.52]
	W0429 19:12:52.404343       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.27 192.168.39.52]
	
	
	==> kube-apiserver [8f21f1cfa42f5dc7250d4b936ccac831fb3c1028e1832fef69bf664596a8c441] <==
	I0429 19:11:58.132927       1 options.go:221] external host was not specified, using 192.168.39.52
	I0429 19:11:58.134086       1 server.go:148] Version: v1.30.0
	I0429 19:11:58.134135       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:11:58.946383       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0429 19:11:58.947231       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0429 19:11:58.947360       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0429 19:11:58.947541       1 instance.go:299] Using reconciler: lease
	I0429 19:11:58.947306       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0429 19:12:18.942597       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0429 19:12:18.942596       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0429 19:12:18.948235       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0429 19:12:18.948484       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [0d3212de69ac372cf90c1735c062daa36d336d730750901cd5fb573b42df375e] <==
	I0429 19:11:59.383043       1 serving.go:380] Generated self-signed cert in-memory
	I0429 19:11:59.854921       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0429 19:11:59.855008       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:11:59.856659       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0429 19:11:59.856898       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 19:11:59.856921       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 19:11:59.856933       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0429 19:12:19.957270       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.52:8443/healthz\": dial tcp 192.168.39.52:8443: connect: connection refused"
	
	
	==> kube-controller-manager [31dcb7268514a41d84040496fb3f97dd604c39d860db3795b1f536f6388d6c11] <==
	E0429 19:15:34.612170       1 gc_controller.go:153] "Failed to get node" err="node \"ha-058855-m03\" not found" logger="pod-garbage-collector-controller" node="ha-058855-m03"
	E0429 19:15:34.612178       1 gc_controller.go:153] "Failed to get node" err="node \"ha-058855-m03\" not found" logger="pod-garbage-collector-controller" node="ha-058855-m03"
	E0429 19:15:34.612183       1 gc_controller.go:153] "Failed to get node" err="node \"ha-058855-m03\" not found" logger="pod-garbage-collector-controller" node="ha-058855-m03"
	E0429 19:15:34.612194       1 gc_controller.go:153] "Failed to get node" err="node \"ha-058855-m03\" not found" logger="pod-garbage-collector-controller" node="ha-058855-m03"
	I0429 19:15:48.618079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.671512ms"
	I0429 19:15:48.618341       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.66µs"
	E0429 19:15:54.612472       1 gc_controller.go:153] "Failed to get node" err="node \"ha-058855-m03\" not found" logger="pod-garbage-collector-controller" node="ha-058855-m03"
	E0429 19:15:54.612570       1 gc_controller.go:153] "Failed to get node" err="node \"ha-058855-m03\" not found" logger="pod-garbage-collector-controller" node="ha-058855-m03"
	E0429 19:15:54.612598       1 gc_controller.go:153] "Failed to get node" err="node \"ha-058855-m03\" not found" logger="pod-garbage-collector-controller" node="ha-058855-m03"
	E0429 19:15:54.612623       1 gc_controller.go:153] "Failed to get node" err="node \"ha-058855-m03\" not found" logger="pod-garbage-collector-controller" node="ha-058855-m03"
	E0429 19:15:54.612647       1 gc_controller.go:153] "Failed to get node" err="node \"ha-058855-m03\" not found" logger="pod-garbage-collector-controller" node="ha-058855-m03"
	I0429 19:15:54.624352       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-058855-m03"
	I0429 19:15:54.656936       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-058855-m03"
	I0429 19:15:54.657034       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-m4fgv"
	I0429 19:15:54.687533       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-m4fgv"
	I0429 19:15:54.687580       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-058855-m03"
	I0429 19:15:54.720197       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-058855-m03"
	I0429 19:15:54.720251       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-29svc"
	I0429 19:15:54.751344       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-29svc"
	I0429 19:15:54.752576       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-058855-m03"
	I0429 19:15:54.782515       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-058855-m03"
	I0429 19:15:54.783001       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-058855-m03"
	I0429 19:15:54.817080       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-058855-m03"
	I0429 19:15:54.817176       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-058855-m03"
	I0429 19:15:54.844309       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-058855-m03"
	
	
	==> kube-proxy [2e3b2e1683b77eb5e433f57a31f9a25eaccc9713a2083f2538e904b657230ac5] <==
	E0429 19:09:02.559525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:05.631064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:05.631143       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:05.631219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:05.631265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:05.631451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:05.631504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:09.856229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:09.856269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:12.930544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:12.930639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:12.930749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:12.930858       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:22.144711       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:22.144955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:25.216026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:25.216097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:25.216292       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:25.216346       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:43.647034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:43.647716       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1917": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:52.864464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:52.864734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1920": dial tcp 192.168.39.254:8443: connect: no route to host
	W0429 19:09:52.864607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	E0429 19:09:52.864899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-058855&resourceVersion=1943": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [3234c6a2a02115d1a2b3c8db09477d14fa780e263e04d16a673863bdef318b03] <==
	I0429 19:11:59.708697       1 server_linux.go:69] "Using iptables proxy"
	E0429 19:12:01.887290       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-058855\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 19:12:04.958370       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-058855\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 19:12:08.031520       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-058855\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 19:12:14.176187       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-058855\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0429 19:12:23.391453       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-058855\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0429 19:12:41.425661       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.52"]
	I0429 19:12:41.508235       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:12:41.508332       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:12:41.508353       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:12:41.511642       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:12:41.512052       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:12:41.512100       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:12:41.514289       1 config.go:192] "Starting service config controller"
	I0429 19:12:41.514348       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:12:41.514414       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:12:41.514450       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:12:41.515318       1 config.go:319] "Starting node config controller"
	I0429 19:12:41.515362       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:12:41.614914       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:12:41.615037       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:12:41.615905       1 shared_informer.go:320] Caches are synced for node config
	W0429 19:16:06.380544       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0429 19:16:06.380544       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0429 19:16:06.382326       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [02cf56519f638778caaaa8342593494ae6cecd78d3a8f6122ae98be89f810dae] <==
	W0429 19:12:30.611206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.52:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:30.611313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.52:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:33.894563       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.52:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:33.894698       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.52:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:34.104204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.52:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:34.104302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.52:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:34.169432       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.52:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:34.169498       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.52:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:36.140567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.52:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:36.140616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.52:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:36.662102       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.52:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:36.662244       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.52:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:37.717059       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.52:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:37.717150       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.52:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:37.835575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.52:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:37.835663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.52:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:38.207302       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.52:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	E0429 19:12:38.207669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.52:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.52:8443: connect: connection refused
	W0429 19:12:40.663650       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:12:40.663717       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:12:40.663887       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:12:40.663928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:12:40.664018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 19:12:40.664031       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0429 19:12:54.370750       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [3c1cf7e86cc05249d4be4ed07eecfb6755ae560c1843eb541f058bce3959e1ad] <==
	W0429 19:10:05.873309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 19:10:05.873469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 19:10:06.230848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:10:06.230984       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:10:06.371480       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 19:10:06.371535       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 19:10:06.406073       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 19:10:06.406142       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 19:10:06.431343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 19:10:06.431459       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 19:10:06.538567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 19:10:06.538899       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 19:10:07.093946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 19:10:07.094036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 19:10:07.178140       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 19:10:07.178200       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 19:10:07.391982       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 19:10:07.392186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 19:10:07.519987       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 19:10:07.520074       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 19:10:07.899992       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 19:10:07.900060       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 19:10:07.959224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:10:07.959348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:10:12.895846       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 29 19:13:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:13:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:13:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:13:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:13:34 ha-058855 kubelet[1376]: I0429 19:13:34.560378    1376 scope.go:117] "RemoveContainer" containerID="2ca11a172d18b7da9d7ad94a0a9eae78db44bfaec6ec0ce8cc6be0a5c4d6e791"
	Apr 29 19:14:29 ha-058855 kubelet[1376]: E0429 19:14:29.601924    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:14:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:14:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:14:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:14:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:15:29 ha-058855 kubelet[1376]: E0429 19:15:29.599469    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:15:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:15:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:15:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:15:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:16:29 ha-058855 kubelet[1376]: E0429 19:16:29.603104    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:16:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:16:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:16:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:16:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:17:29 ha-058855 kubelet[1376]: E0429 19:17:29.599751    1376 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:17:29 ha-058855 kubelet[1376]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:17:29 ha-058855 kubelet[1376]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:17:29 ha-058855 kubelet[1376]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:17:29 ha-058855 kubelet[1376]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 19:17:35.707130   39675 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18774-7754/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-058855 -n ha-058855
helpers_test.go:261: (dbg) Run:  kubectl --context ha-058855 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.08s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (310.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-773806
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-773806
E0429 19:32:48.915381   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-773806: exit status 82 (2m2.706649701s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-773806-m03"  ...
	* Stopping node "multinode-773806-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-773806" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-773806 --wait=true -v=8 --alsologtostderr
E0429 19:34:00.893949   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 19:35:51.959583   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-773806 --wait=true -v=8 --alsologtostderr: (3m4.907397047s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-773806
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-773806 -n multinode-773806
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-773806 logs -n 25: (1.691334457s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773806 cp multinode-773806-m02:/home/docker/cp-test.txt                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1658952582/001/cp-test_multinode-773806-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773806 cp multinode-773806-m02:/home/docker/cp-test.txt                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806:/home/docker/cp-test_multinode-773806-m02_multinode-773806.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n multinode-773806 sudo cat                                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | /home/docker/cp-test_multinode-773806-m02_multinode-773806.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-773806 cp multinode-773806-m02:/home/docker/cp-test.txt                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03:/home/docker/cp-test_multinode-773806-m02_multinode-773806-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n multinode-773806-m03 sudo cat                                   | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | /home/docker/cp-test_multinode-773806-m02_multinode-773806-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-773806 cp testdata/cp-test.txt                                                | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773806 cp multinode-773806-m03:/home/docker/cp-test.txt                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1658952582/001/cp-test_multinode-773806-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773806 cp multinode-773806-m03:/home/docker/cp-test.txt                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806:/home/docker/cp-test_multinode-773806-m03_multinode-773806.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n multinode-773806 sudo cat                                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | /home/docker/cp-test_multinode-773806-m03_multinode-773806.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-773806 cp multinode-773806-m03:/home/docker/cp-test.txt                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m02:/home/docker/cp-test_multinode-773806-m03_multinode-773806-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n multinode-773806-m02 sudo cat                                   | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | /home/docker/cp-test_multinode-773806-m03_multinode-773806-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-773806 node stop m03                                                          | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	| node    | multinode-773806 node start                                                             | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:31 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-773806                                                                | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:31 UTC |                     |
	| stop    | -p multinode-773806                                                                     | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:31 UTC |                     |
	| start   | -p multinode-773806                                                                     | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:33 UTC | 29 Apr 24 19:36 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-773806                                                                | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:36 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 19:33:33
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 19:33:33.991835   49175 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:33:33.991967   49175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:33:33.991979   49175 out.go:304] Setting ErrFile to fd 2...
	I0429 19:33:33.991986   49175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:33:33.992183   49175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:33:33.992796   49175 out.go:298] Setting JSON to false
	I0429 19:33:33.993823   49175 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4512,"bootTime":1714414702,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 19:33:33.993885   49175 start.go:139] virtualization: kvm guest
	I0429 19:33:33.996516   49175 out.go:177] * [multinode-773806] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 19:33:33.998436   49175 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:33:33.999986   49175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:33:33.998372   49175 notify.go:220] Checking for updates...
	I0429 19:33:34.001887   49175 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:33:34.003625   49175 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:33:34.005188   49175 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 19:33:34.006717   49175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:33:34.008541   49175 config.go:182] Loaded profile config "multinode-773806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:33:34.008659   49175 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:33:34.009250   49175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:33:34.009304   49175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:33:34.024374   49175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39369
	I0429 19:33:34.024873   49175 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:33:34.025396   49175 main.go:141] libmachine: Using API Version  1
	I0429 19:33:34.025420   49175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:33:34.025797   49175 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:33:34.026004   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:33:34.063449   49175 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 19:33:34.064620   49175 start.go:297] selected driver: kvm2
	I0429 19:33:34.064638   49175 start.go:901] validating driver "kvm2" against &{Name:multinode-773806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-773806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:33:34.064776   49175 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:33:34.065110   49175 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:33:34.065176   49175 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 19:33:34.080178   49175 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 19:33:34.080798   49175 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:33:34.080849   49175 cni.go:84] Creating CNI manager for ""
	I0429 19:33:34.080859   49175 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 19:33:34.080923   49175 start.go:340] cluster config:
	{Name:multinode-773806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-773806 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:33:34.081040   49175 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:33:34.083663   49175 out.go:177] * Starting "multinode-773806" primary control-plane node in "multinode-773806" cluster
	I0429 19:33:34.084954   49175 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:33:34.084991   49175 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 19:33:34.084998   49175 cache.go:56] Caching tarball of preloaded images
	I0429 19:33:34.085083   49175 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 19:33:34.085095   49175 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 19:33:34.085210   49175 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/config.json ...
	I0429 19:33:34.085383   49175 start.go:360] acquireMachinesLock for multinode-773806: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:33:34.085423   49175 start.go:364] duration metric: took 23.863µs to acquireMachinesLock for "multinode-773806"
	I0429 19:33:34.085444   49175 start.go:96] Skipping create...Using existing machine configuration
	I0429 19:33:34.085452   49175 fix.go:54] fixHost starting: 
	I0429 19:33:34.085710   49175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:33:34.085743   49175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:33:34.100203   49175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0429 19:33:34.100662   49175 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:33:34.101111   49175 main.go:141] libmachine: Using API Version  1
	I0429 19:33:34.101133   49175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:33:34.101443   49175 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:33:34.101625   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:33:34.101798   49175 main.go:141] libmachine: (multinode-773806) Calling .GetState
	I0429 19:33:34.103423   49175 fix.go:112] recreateIfNeeded on multinode-773806: state=Running err=<nil>
	W0429 19:33:34.103457   49175 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 19:33:34.106239   49175 out.go:177] * Updating the running kvm2 "multinode-773806" VM ...
	I0429 19:33:34.107546   49175 machine.go:94] provisionDockerMachine start ...
	I0429 19:33:34.107570   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:33:34.107797   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:33:34.110454   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.110852   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.110881   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.111027   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:33:34.111204   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.111385   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.111508   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:33:34.111668   49175 main.go:141] libmachine: Using SSH client type: native
	I0429 19:33:34.111898   49175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0429 19:33:34.111910   49175 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 19:33:34.232249   49175 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773806
	
	I0429 19:33:34.232280   49175 main.go:141] libmachine: (multinode-773806) Calling .GetMachineName
	I0429 19:33:34.232526   49175 buildroot.go:166] provisioning hostname "multinode-773806"
	I0429 19:33:34.232552   49175 main.go:141] libmachine: (multinode-773806) Calling .GetMachineName
	I0429 19:33:34.232761   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:33:34.235460   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.235889   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.235918   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.236090   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:33:34.236344   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.236493   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.236636   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:33:34.236771   49175 main.go:141] libmachine: Using SSH client type: native
	I0429 19:33:34.236939   49175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0429 19:33:34.236951   49175 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-773806 && echo "multinode-773806" | sudo tee /etc/hostname
	I0429 19:33:34.373214   49175 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773806
	
	I0429 19:33:34.373250   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:33:34.376042   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.376411   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.376461   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.376623   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:33:34.376780   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.376938   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.377064   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:33:34.377217   49175 main.go:141] libmachine: Using SSH client type: native
	I0429 19:33:34.377375   49175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0429 19:33:34.377390   49175 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-773806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773806/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-773806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:33:34.495374   49175 main.go:141] libmachine: SSH cmd err, output: <nil>: 
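A quick manual check that the two hostname commands above took effect (a sketch using only standard tools inside the guest; none of this is run by the test itself):

    hostname                        # should print multinode-773806
    grep '127.0.1.1' /etc/hosts     # should show the "127.0.1.1 multinode-773806" entry added above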
	I0429 19:33:34.495403   49175 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:33:34.495434   49175 buildroot.go:174] setting up certificates
	I0429 19:33:34.495456   49175 provision.go:84] configureAuth start
	I0429 19:33:34.495474   49175 main.go:141] libmachine: (multinode-773806) Calling .GetMachineName
	I0429 19:33:34.495731   49175 main.go:141] libmachine: (multinode-773806) Calling .GetIP
	I0429 19:33:34.498078   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.498431   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.498458   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.498590   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:33:34.500941   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.501308   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.501331   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.501449   49175 provision.go:143] copyHostCerts
	I0429 19:33:34.501479   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:33:34.501528   49175 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:33:34.501541   49175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:33:34.501618   49175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:33:34.501688   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:33:34.501708   49175 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:33:34.501715   49175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:33:34.501740   49175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:33:34.501821   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:33:34.501845   49175 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:33:34.501852   49175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:33:34.501873   49175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:33:34.501913   49175 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.multinode-773806 san=[127.0.0.1 192.168.39.127 localhost minikube multinode-773806]
	I0429 19:33:34.557143   49175 provision.go:177] copyRemoteCerts
	I0429 19:33:34.557188   49175 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:33:34.557207   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:33:34.559456   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.559782   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.559807   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.559954   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:33:34.560125   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.560282   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:33:34.560423   49175 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/multinode-773806/id_rsa Username:docker}
	I0429 19:33:34.653008   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 19:33:34.653080   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:33:34.683856   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 19:33:34.683961   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 19:33:34.713972   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 19:33:34.714040   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 19:33:34.744063   49175 provision.go:87] duration metric: took 248.589098ms to configureAuth
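The three files scp'd above land under /etc/docker on the guest; a minimal sketch for inspecting them over SSH (paths are the remote destinations from the scp lines; openssl being present in the Buildroot image is an assumption):

    ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    # SANs should match what provision.go logs: 127.0.0.1 192.168.39.127 localhost minikube multinode-773806
    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'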
	I0429 19:33:34.744096   49175 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:33:34.744363   49175 config.go:182] Loaded profile config "multinode-773806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:33:34.744434   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:33:34.747207   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.747598   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.747629   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.747797   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:33:34.748006   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.748275   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.748419   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:33:34.748571   49175 main.go:141] libmachine: Using SSH client type: native
	I0429 19:33:34.748796   49175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0429 19:33:34.748814   49175 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:35:05.540872   49175 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 19:35:05.540905   49175 machine.go:97] duration metric: took 1m31.433345092s to provisionDockerMachine
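Nearly all of the 1m31s reported for provisionDockerMachine is the single SSH command above (issued at 19:33:34, returning at 19:35:05), i.e. writing /etc/sysconfig/crio.minikube and restarting cri-o. A sketch for verifying that drop-in by hand (file path and option value are taken from the log; this is not something the test asserts):

    cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio        # should report "active" after the restart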
	I0429 19:35:05.540921   49175 start.go:293] postStartSetup for "multinode-773806" (driver="kvm2")
	I0429 19:35:05.540937   49175 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:35:05.540963   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:35:05.541284   49175 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:35:05.541344   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:35:05.544538   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.544994   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:35:05.545015   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.545153   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:35:05.545350   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:35:05.545514   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:35:05.545644   49175 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/multinode-773806/id_rsa Username:docker}
	I0429 19:35:05.635981   49175 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:35:05.641067   49175 command_runner.go:130] > NAME=Buildroot
	I0429 19:35:05.641089   49175 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 19:35:05.641093   49175 command_runner.go:130] > ID=buildroot
	I0429 19:35:05.641098   49175 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 19:35:05.641107   49175 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 19:35:05.641133   49175 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:35:05.641143   49175 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:35:05.641201   49175 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:35:05.641280   49175 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:35:05.641289   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /etc/ssl/certs/151242.pem
	I0429 19:35:05.641362   49175 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:35:05.653065   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:35:05.680006   49175 start.go:296] duration metric: took 139.070949ms for postStartSetup
	I0429 19:35:05.680049   49175 fix.go:56] duration metric: took 1m31.594595333s for fixHost
	I0429 19:35:05.680077   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:35:05.683392   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.683853   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:35:05.683885   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.684078   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:35:05.684255   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:35:05.684452   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:35:05.684618   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:35:05.684800   49175 main.go:141] libmachine: Using SSH client type: native
	I0429 19:35:05.684979   49175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0429 19:35:05.684991   49175 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 19:35:05.803708   49175 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714419305.777775377
	
	I0429 19:35:05.803735   49175 fix.go:216] guest clock: 1714419305.777775377
	I0429 19:35:05.803745   49175 fix.go:229] Guest: 2024-04-29 19:35:05.777775377 +0000 UTC Remote: 2024-04-29 19:35:05.680055131 +0000 UTC m=+91.742029303 (delta=97.720246ms)
	I0429 19:35:05.803765   49175 fix.go:200] guest clock delta is within tolerance: 97.720246ms
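The delta fix.go reports is just the difference of the two timestamps in the line above (guest 1714419305.777775377 vs. remote 1714419305.680055131); a one-liner reproducing it, rounded because a double cannot carry the full nanosecond precision:

    awk 'BEGIN { printf "%.4f s\n", 1714419305.777775377 - 1714419305.680055131 }'   # ~0.0977 s, i.e. the 97.720246ms in the log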
	I0429 19:35:05.803771   49175 start.go:83] releasing machines lock for "multinode-773806", held for 1m31.718338271s
	I0429 19:35:05.803793   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:35:05.804162   49175 main.go:141] libmachine: (multinode-773806) Calling .GetIP
	I0429 19:35:05.806837   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.807209   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:35:05.807231   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.807430   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:35:05.807936   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:35:05.808113   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:35:05.808224   49175 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:35:05.808263   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:35:05.808389   49175 ssh_runner.go:195] Run: cat /version.json
	I0429 19:35:05.808414   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:35:05.811145   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.811223   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.811643   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:35:05.811693   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.811723   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:35:05.811746   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.811884   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:35:05.811962   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:35:05.812052   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:35:05.812112   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:35:05.812171   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:35:05.812314   49175 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/multinode-773806/id_rsa Username:docker}
	I0429 19:35:05.812353   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:35:05.812517   49175 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/multinode-773806/id_rsa Username:docker}
	I0429 19:35:05.917360   49175 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 19:35:05.918216   49175 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 19:35:05.918365   49175 ssh_runner.go:195] Run: systemctl --version
	I0429 19:35:05.925385   49175 command_runner.go:130] > systemd 252 (252)
	I0429 19:35:05.925428   49175 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 19:35:05.925479   49175 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:35:06.100476   49175 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 19:35:06.107367   49175 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 19:35:06.107595   49175 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:35:06.107665   49175 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:35:06.118538   49175 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 19:35:06.118561   49175 start.go:494] detecting cgroup driver to use...
	I0429 19:35:06.118615   49175 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:35:06.136934   49175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:35:06.151779   49175 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:35:06.151897   49175 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:35:06.167406   49175 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:35:06.182732   49175 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:35:06.330224   49175 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:35:06.475889   49175 docker.go:233] disabling docker service ...
	I0429 19:35:06.475963   49175 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:35:06.495296   49175 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:35:06.510338   49175 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:35:06.661248   49175 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:35:06.812462   49175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 19:35:06.829819   49175 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:35:06.850861   49175 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0429 19:35:06.850912   49175 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 19:35:06.850961   49175 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.862802   49175 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:35:06.862857   49175 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.874226   49175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.886079   49175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.897136   49175 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:35:06.909093   49175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.942885   49175 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.956116   49175 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.967739   49175 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:35:06.978275   49175 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 19:35:06.978345   49175 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
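Taken together, the tee/sed commands above pin the crictl endpoint, the pause image, the cgroup driver and the unprivileged-port sysctl. A sketch for double-checking the result on the guest once cri-o has been restarted below (all paths and expected values come from the commands themselves):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected: pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs", conmon_cgroup = "pod"
    cat /etc/crictl.yaml            # runtime-endpoint: unix:///var/run/crio/crio.sock
    sysctl net.ipv4.ip_forward      # 1, per the echo above
    grep -A1 'default_sysctls' /etc/crio/crio.conf.d/02-crio.conf   # net.ipv4.ip_unprivileged_port_start=0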
	I0429 19:35:06.988613   49175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:35:07.135687   49175 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 19:35:07.402286   49175 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:35:07.402355   49175 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:35:07.408902   49175 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0429 19:35:07.408923   49175 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 19:35:07.408930   49175 command_runner.go:130] > Device: 0,22	Inode: 1329        Links: 1
	I0429 19:35:07.408937   49175 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 19:35:07.408942   49175 command_runner.go:130] > Access: 2024-04-29 19:35:07.259778624 +0000
	I0429 19:35:07.408948   49175 command_runner.go:130] > Modify: 2024-04-29 19:35:07.259778624 +0000
	I0429 19:35:07.408953   49175 command_runner.go:130] > Change: 2024-04-29 19:35:07.259778624 +0000
	I0429 19:35:07.408957   49175 command_runner.go:130] >  Birth: -
	I0429 19:35:07.409048   49175 start.go:562] Will wait 60s for crictl version
	I0429 19:35:07.409112   49175 ssh_runner.go:195] Run: which crictl
	I0429 19:35:07.413559   49175 command_runner.go:130] > /usr/bin/crictl
	I0429 19:35:07.413632   49175 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:35:07.459073   49175 command_runner.go:130] > Version:  0.1.0
	I0429 19:35:07.459096   49175 command_runner.go:130] > RuntimeName:  cri-o
	I0429 19:35:07.459101   49175 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0429 19:35:07.459105   49175 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 19:35:07.460725   49175 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
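The same probe can be repeated by hand against the socket that /etc/crictl.yaml points at; a sketch (the --runtime-endpoint flag is the standard crictl option; the log itself relies on the config file instead):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version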
	I0429 19:35:07.460812   49175 ssh_runner.go:195] Run: crio --version
	I0429 19:35:07.491928   49175 command_runner.go:130] > crio version 1.29.1
	I0429 19:35:07.491952   49175 command_runner.go:130] > Version:        1.29.1
	I0429 19:35:07.491958   49175 command_runner.go:130] > GitCommit:      unknown
	I0429 19:35:07.491962   49175 command_runner.go:130] > GitCommitDate:  unknown
	I0429 19:35:07.491985   49175 command_runner.go:130] > GitTreeState:   clean
	I0429 19:35:07.491991   49175 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0429 19:35:07.491995   49175 command_runner.go:130] > GoVersion:      go1.21.6
	I0429 19:35:07.492000   49175 command_runner.go:130] > Compiler:       gc
	I0429 19:35:07.492004   49175 command_runner.go:130] > Platform:       linux/amd64
	I0429 19:35:07.492008   49175 command_runner.go:130] > Linkmode:       dynamic
	I0429 19:35:07.492012   49175 command_runner.go:130] > BuildTags:      
	I0429 19:35:07.492022   49175 command_runner.go:130] >   containers_image_ostree_stub
	I0429 19:35:07.492026   49175 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0429 19:35:07.492030   49175 command_runner.go:130] >   btrfs_noversion
	I0429 19:35:07.492034   49175 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0429 19:35:07.492038   49175 command_runner.go:130] >   libdm_no_deferred_remove
	I0429 19:35:07.492041   49175 command_runner.go:130] >   seccomp
	I0429 19:35:07.492047   49175 command_runner.go:130] > LDFlags:          unknown
	I0429 19:35:07.492053   49175 command_runner.go:130] > SeccompEnabled:   true
	I0429 19:35:07.492057   49175 command_runner.go:130] > AppArmorEnabled:  false
	I0429 19:35:07.493444   49175 ssh_runner.go:195] Run: crio --version
	I0429 19:35:07.528960   49175 command_runner.go:130] > crio version 1.29.1
	I0429 19:35:07.528994   49175 command_runner.go:130] > Version:        1.29.1
	I0429 19:35:07.529002   49175 command_runner.go:130] > GitCommit:      unknown
	I0429 19:35:07.529009   49175 command_runner.go:130] > GitCommitDate:  unknown
	I0429 19:35:07.529015   49175 command_runner.go:130] > GitTreeState:   clean
	I0429 19:35:07.529024   49175 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0429 19:35:07.529030   49175 command_runner.go:130] > GoVersion:      go1.21.6
	I0429 19:35:07.529037   49175 command_runner.go:130] > Compiler:       gc
	I0429 19:35:07.529043   49175 command_runner.go:130] > Platform:       linux/amd64
	I0429 19:35:07.529050   49175 command_runner.go:130] > Linkmode:       dynamic
	I0429 19:35:07.529058   49175 command_runner.go:130] > BuildTags:      
	I0429 19:35:07.529063   49175 command_runner.go:130] >   containers_image_ostree_stub
	I0429 19:35:07.529068   49175 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0429 19:35:07.529072   49175 command_runner.go:130] >   btrfs_noversion
	I0429 19:35:07.529079   49175 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0429 19:35:07.529084   49175 command_runner.go:130] >   libdm_no_deferred_remove
	I0429 19:35:07.529088   49175 command_runner.go:130] >   seccomp
	I0429 19:35:07.529093   49175 command_runner.go:130] > LDFlags:          unknown
	I0429 19:35:07.529108   49175 command_runner.go:130] > SeccompEnabled:   true
	I0429 19:35:07.529122   49175 command_runner.go:130] > AppArmorEnabled:  false
	I0429 19:35:07.531686   49175 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 19:35:07.533484   49175 main.go:141] libmachine: (multinode-773806) Calling .GetIP
	I0429 19:35:07.536184   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:07.536594   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:35:07.536619   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:07.536797   49175 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 19:35:07.541798   49175 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0429 19:35:07.541954   49175 kubeadm.go:877] updating cluster {Name:multinode-773806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.0 ClusterName:multinode-773806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fa
lse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizatio
ns:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 19:35:07.542118   49175 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:35:07.542174   49175 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:35:07.592377   49175 command_runner.go:130] > {
	I0429 19:35:07.592406   49175 command_runner.go:130] >   "images": [
	I0429 19:35:07.592412   49175 command_runner.go:130] >     {
	I0429 19:35:07.592422   49175 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0429 19:35:07.592429   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.592436   49175 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0429 19:35:07.592441   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592446   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.592457   49175 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0429 19:35:07.592467   49175 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0429 19:35:07.592473   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592480   49175 command_runner.go:130] >       "size": "65291810",
	I0429 19:35:07.592487   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.592497   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.592510   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.592520   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.592526   49175 command_runner.go:130] >     },
	I0429 19:35:07.592532   49175 command_runner.go:130] >     {
	I0429 19:35:07.592546   49175 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0429 19:35:07.592556   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.592566   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0429 19:35:07.592575   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592582   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.592598   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0429 19:35:07.592614   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0429 19:35:07.592623   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592631   49175 command_runner.go:130] >       "size": "1363676",
	I0429 19:35:07.592640   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.592652   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.592662   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.592670   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.592680   49175 command_runner.go:130] >     },
	I0429 19:35:07.592685   49175 command_runner.go:130] >     {
	I0429 19:35:07.592697   49175 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0429 19:35:07.592707   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.592716   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0429 19:35:07.592732   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592743   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.592756   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0429 19:35:07.592773   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0429 19:35:07.592792   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592801   49175 command_runner.go:130] >       "size": "31470524",
	I0429 19:35:07.592809   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.592819   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.592827   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.592836   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.592843   49175 command_runner.go:130] >     },
	I0429 19:35:07.592851   49175 command_runner.go:130] >     {
	I0429 19:35:07.592862   49175 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0429 19:35:07.592872   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.592883   49175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0429 19:35:07.592891   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592898   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.592913   49175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0429 19:35:07.592937   49175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0429 19:35:07.592949   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592956   49175 command_runner.go:130] >       "size": "61245718",
	I0429 19:35:07.592962   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.592970   49175 command_runner.go:130] >       "username": "nonroot",
	I0429 19:35:07.592980   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.592988   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.592996   49175 command_runner.go:130] >     },
	I0429 19:35:07.593003   49175 command_runner.go:130] >     {
	I0429 19:35:07.593014   49175 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0429 19:35:07.593024   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.593034   49175 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0429 19:35:07.593042   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593049   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.593065   49175 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0429 19:35:07.593079   49175 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0429 19:35:07.593088   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593096   49175 command_runner.go:130] >       "size": "150779692",
	I0429 19:35:07.593111   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.593122   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.593130   49175 command_runner.go:130] >       },
	I0429 19:35:07.593137   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.593145   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.593155   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.593161   49175 command_runner.go:130] >     },
	I0429 19:35:07.593171   49175 command_runner.go:130] >     {
	I0429 19:35:07.593183   49175 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0429 19:35:07.593193   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.593203   49175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0429 19:35:07.593211   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593219   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.593234   49175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0429 19:35:07.593249   49175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0429 19:35:07.593259   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593267   49175 command_runner.go:130] >       "size": "117609952",
	I0429 19:35:07.593277   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.593283   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.593288   49175 command_runner.go:130] >       },
	I0429 19:35:07.593296   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.593303   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.593387   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.593405   49175 command_runner.go:130] >     },
	I0429 19:35:07.593411   49175 command_runner.go:130] >     {
	I0429 19:35:07.593422   49175 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0429 19:35:07.593433   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.593443   49175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0429 19:35:07.593452   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593460   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.593478   49175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0429 19:35:07.593494   49175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0429 19:35:07.593503   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593511   49175 command_runner.go:130] >       "size": "112170310",
	I0429 19:35:07.593520   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.593527   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.593557   49175 command_runner.go:130] >       },
	I0429 19:35:07.593567   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.593574   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.593594   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.593604   49175 command_runner.go:130] >     },
	I0429 19:35:07.593611   49175 command_runner.go:130] >     {
	I0429 19:35:07.593624   49175 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0429 19:35:07.593634   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.593648   49175 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0429 19:35:07.593656   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593663   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.593695   49175 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0429 19:35:07.593711   49175 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0429 19:35:07.593721   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593728   49175 command_runner.go:130] >       "size": "85932953",
	I0429 19:35:07.593737   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.593745   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.593755   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.593764   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.593769   49175 command_runner.go:130] >     },
	I0429 19:35:07.593774   49175 command_runner.go:130] >     {
	I0429 19:35:07.593783   49175 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0429 19:35:07.593792   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.593801   49175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0429 19:35:07.593806   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593813   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.593825   49175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0429 19:35:07.593842   49175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0429 19:35:07.593851   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593860   49175 command_runner.go:130] >       "size": "63026502",
	I0429 19:35:07.593870   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.593879   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.593886   49175 command_runner.go:130] >       },
	I0429 19:35:07.593896   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.593916   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.593927   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.593941   49175 command_runner.go:130] >     },
	I0429 19:35:07.593951   49175 command_runner.go:130] >     {
	I0429 19:35:07.593963   49175 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0429 19:35:07.593972   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.593981   49175 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0429 19:35:07.593990   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593998   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.594013   49175 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0429 19:35:07.594028   49175 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0429 19:35:07.594037   49175 command_runner.go:130] >       ],
	I0429 19:35:07.594045   49175 command_runner.go:130] >       "size": "750414",
	I0429 19:35:07.594054   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.594062   49175 command_runner.go:130] >         "value": "65535"
	I0429 19:35:07.594083   49175 command_runner.go:130] >       },
	I0429 19:35:07.594090   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.594100   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.594108   49175 command_runner.go:130] >       "pinned": true
	I0429 19:35:07.594116   49175 command_runner.go:130] >     }
	I0429 19:35:07.594122   49175 command_runner.go:130] >   ]
	I0429 19:35:07.594127   49175 command_runner.go:130] > }
	I0429 19:35:07.594323   49175 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 19:35:07.594337   49175 crio.go:433] Images already preloaded, skipping extraction
	I0429 19:35:07.594394   49175 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:35:07.633745   49175 command_runner.go:130] > {
	I0429 19:35:07.633768   49175 command_runner.go:130] >   "images": [
	I0429 19:35:07.633773   49175 command_runner.go:130] >     {
	I0429 19:35:07.633784   49175 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0429 19:35:07.633791   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.633799   49175 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0429 19:35:07.633804   49175 command_runner.go:130] >       ],
	I0429 19:35:07.633810   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.633822   49175 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0429 19:35:07.633832   49175 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0429 19:35:07.633838   49175 command_runner.go:130] >       ],
	I0429 19:35:07.633845   49175 command_runner.go:130] >       "size": "65291810",
	I0429 19:35:07.633856   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.633864   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.633892   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.633903   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.633911   49175 command_runner.go:130] >     },
	I0429 19:35:07.633916   49175 command_runner.go:130] >     {
	I0429 19:35:07.633928   49175 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0429 19:35:07.633938   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.633949   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0429 19:35:07.633957   49175 command_runner.go:130] >       ],
	I0429 19:35:07.633965   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.633981   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0429 19:35:07.633996   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0429 19:35:07.634005   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634019   49175 command_runner.go:130] >       "size": "1363676",
	I0429 19:35:07.634028   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.634041   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.634050   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.634057   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.634076   49175 command_runner.go:130] >     },
	I0429 19:35:07.634082   49175 command_runner.go:130] >     {
	I0429 19:35:07.634094   49175 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0429 19:35:07.634104   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.634114   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0429 19:35:07.634123   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634129   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.634147   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0429 19:35:07.634163   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0429 19:35:07.634172   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634179   49175 command_runner.go:130] >       "size": "31470524",
	I0429 19:35:07.634190   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.634198   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.634206   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.634217   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.634225   49175 command_runner.go:130] >     },
	I0429 19:35:07.634231   49175 command_runner.go:130] >     {
	I0429 19:35:07.634245   49175 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0429 19:35:07.634256   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.634266   49175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0429 19:35:07.634274   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634280   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.634296   49175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0429 19:35:07.634327   49175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0429 19:35:07.634336   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634343   49175 command_runner.go:130] >       "size": "61245718",
	I0429 19:35:07.634349   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.634360   49175 command_runner.go:130] >       "username": "nonroot",
	I0429 19:35:07.634371   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.634379   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.634388   49175 command_runner.go:130] >     },
	I0429 19:35:07.634404   49175 command_runner.go:130] >     {
	I0429 19:35:07.634418   49175 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0429 19:35:07.634428   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.634439   49175 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0429 19:35:07.634447   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634454   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.634466   49175 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0429 19:35:07.634481   49175 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0429 19:35:07.634490   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634497   49175 command_runner.go:130] >       "size": "150779692",
	I0429 19:35:07.634506   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.634513   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.634522   49175 command_runner.go:130] >       },
	I0429 19:35:07.634530   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.634539   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.634546   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.634554   49175 command_runner.go:130] >     },
	I0429 19:35:07.634560   49175 command_runner.go:130] >     {
	I0429 19:35:07.634571   49175 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0429 19:35:07.634581   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.634593   49175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0429 19:35:07.634600   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634608   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.634624   49175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0429 19:35:07.634639   49175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0429 19:35:07.634647   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634655   49175 command_runner.go:130] >       "size": "117609952",
	I0429 19:35:07.634665   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.634673   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.634679   49175 command_runner.go:130] >       },
	I0429 19:35:07.634688   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.634698   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.634706   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.634719   49175 command_runner.go:130] >     },
	I0429 19:35:07.634728   49175 command_runner.go:130] >     {
	I0429 19:35:07.634739   49175 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0429 19:35:07.634755   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.634768   49175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0429 19:35:07.634777   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634784   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.634800   49175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0429 19:35:07.634816   49175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0429 19:35:07.634829   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634840   49175 command_runner.go:130] >       "size": "112170310",
	I0429 19:35:07.634847   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.634854   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.634861   49175 command_runner.go:130] >       },
	I0429 19:35:07.634869   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.634875   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.634882   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.634888   49175 command_runner.go:130] >     },
	I0429 19:35:07.634895   49175 command_runner.go:130] >     {
	I0429 19:35:07.634907   49175 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0429 19:35:07.634917   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.634926   49175 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0429 19:35:07.634935   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634943   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.634975   49175 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0429 19:35:07.634991   49175 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0429 19:35:07.634998   49175 command_runner.go:130] >       ],
	I0429 19:35:07.635008   49175 command_runner.go:130] >       "size": "85932953",
	I0429 19:35:07.635016   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.635026   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.635034   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.635043   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.635049   49175 command_runner.go:130] >     },
	I0429 19:35:07.635055   49175 command_runner.go:130] >     {
	I0429 19:35:07.635066   49175 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0429 19:35:07.635076   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.635085   49175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0429 19:35:07.635093   49175 command_runner.go:130] >       ],
	I0429 19:35:07.635101   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.635123   49175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0429 19:35:07.635139   49175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0429 19:35:07.635163   49175 command_runner.go:130] >       ],
	I0429 19:35:07.635173   49175 command_runner.go:130] >       "size": "63026502",
	I0429 19:35:07.635179   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.635184   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.635190   49175 command_runner.go:130] >       },
	I0429 19:35:07.635198   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.635207   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.635214   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.635223   49175 command_runner.go:130] >     },
	I0429 19:35:07.635229   49175 command_runner.go:130] >     {
	I0429 19:35:07.635242   49175 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0429 19:35:07.635251   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.635260   49175 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0429 19:35:07.635269   49175 command_runner.go:130] >       ],
	I0429 19:35:07.635276   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.635292   49175 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0429 19:35:07.635315   49175 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0429 19:35:07.635324   49175 command_runner.go:130] >       ],
	I0429 19:35:07.635333   49175 command_runner.go:130] >       "size": "750414",
	I0429 19:35:07.635341   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.635349   49175 command_runner.go:130] >         "value": "65535"
	I0429 19:35:07.635358   49175 command_runner.go:130] >       },
	I0429 19:35:07.635365   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.635374   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.635382   49175 command_runner.go:130] >       "pinned": true
	I0429 19:35:07.635390   49175 command_runner.go:130] >     }
	I0429 19:35:07.635395   49175 command_runner.go:130] >   ]
	I0429 19:35:07.635400   49175 command_runner.go:130] > }
	I0429 19:35:07.635544   49175 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 19:35:07.635558   49175 cache_images.go:84] Images are preloaded, skipping loading
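	(Editor's note) The preload check logged just above amounts to parsing the "crictl images --output json" payload and confirming that the expected image tags are already in the CRI-O store. The Go sketch below shows one way such a check could look; the struct fields mirror the JSON dumped in the log, but the helper names, the sudo invocation, and the tag list are illustrative assumptions, not minikube's actual cache_images.go implementation.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// criImage mirrors the fields visible in the `crictl images --output json`
	// dump above; only the fields needed for a preload check are declared.
	type criImage struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Pinned   bool     `json:"pinned"`
	}

	type criImageList struct {
		Images []criImage `json:"images"`
	}

	// hasImages reports whether every tag in want is present in the runtime's
	// image store. A minimal sketch of the kind of check the log summarizes as
	// "all images are preloaded"; not minikube's real code.
	func hasImages(raw []byte, want []string) (bool, error) {
		var list criImageList
		if err := json.Unmarshal(raw, &list); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, tag := range want {
			if !have[tag] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		// Assumes crictl is on PATH and is run via sudo, as in the log above.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		ok, err := hasImages(out, []string{
			"registry.k8s.io/kube-apiserver:v1.30.0",
			"registry.k8s.io/etcd:3.5.12-0",
		})
		if err != nil {
			panic(err)
		}
		fmt.Println("preloaded:", ok)
	}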
	I0429 19:35:07.635568   49175 kubeadm.go:928] updating node { 192.168.39.127 8443 v1.30.0 crio true true} ...
	I0429 19:35:07.635709   49175 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-773806 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-773806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
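	(Editor's note) The kubelet fragment logged above is a systemd drop-in that minikube renders with node-specific values (hostname override, node IP, Kubernetes version). The Go text/template sketch below shows how such an override could be generated from those values; the template text and struct fields are illustrative assumptions, not the template minikube actually ships.

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletOpts holds the node-specific values substituted into the override,
	// matching what the log shows for multinode-773806. Field names are
	// illustrative, not minikube's actual struct.
	type kubeletOpts struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	// unitTmpl is a minimal drop-in in the same shape as the [Unit]/[Service]/
	// [Install] snippet logged above; a sketch, not minikube's template.
	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		// Values taken from the log lines above.
		_ = t.Execute(os.Stdout, kubeletOpts{
			KubernetesVersion: "v1.30.0",
			Hostname:          "multinode-773806",
			NodeIP:            "192.168.39.127",
		})
	}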
	I0429 19:35:07.635790   49175 ssh_runner.go:195] Run: crio config
	I0429 19:35:07.675183   49175 command_runner.go:130] ! time="2024-04-29 19:35:07.649353080Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0429 19:35:07.682393   49175 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0429 19:35:07.689141   49175 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0429 19:35:07.689163   49175 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0429 19:35:07.689170   49175 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0429 19:35:07.689173   49175 command_runner.go:130] > #
	I0429 19:35:07.689179   49175 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0429 19:35:07.689185   49175 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0429 19:35:07.689191   49175 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0429 19:35:07.689199   49175 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0429 19:35:07.689202   49175 command_runner.go:130] > # reload'.
	I0429 19:35:07.689208   49175 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0429 19:35:07.689214   49175 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0429 19:35:07.689221   49175 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0429 19:35:07.689236   49175 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0429 19:35:07.689243   49175 command_runner.go:130] > [crio]
	I0429 19:35:07.689249   49175 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0429 19:35:07.689257   49175 command_runner.go:130] > # containers images, in this directory.
	I0429 19:35:07.689262   49175 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0429 19:35:07.689272   49175 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0429 19:35:07.689289   49175 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0429 19:35:07.689297   49175 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0429 19:35:07.689301   49175 command_runner.go:130] > # imagestore = ""
	I0429 19:35:07.689308   49175 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0429 19:35:07.689314   49175 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0429 19:35:07.689321   49175 command_runner.go:130] > storage_driver = "overlay"
	I0429 19:35:07.689326   49175 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0429 19:35:07.689333   49175 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0429 19:35:07.689337   49175 command_runner.go:130] > storage_option = [
	I0429 19:35:07.689344   49175 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0429 19:35:07.689347   49175 command_runner.go:130] > ]
	I0429 19:35:07.689356   49175 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0429 19:35:07.689364   49175 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0429 19:35:07.689369   49175 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0429 19:35:07.689374   49175 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0429 19:35:07.689382   49175 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0429 19:35:07.689389   49175 command_runner.go:130] > # always happen on a node reboot
	I0429 19:35:07.689394   49175 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0429 19:35:07.689407   49175 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0429 19:35:07.689416   49175 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0429 19:35:07.689421   49175 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0429 19:35:07.689428   49175 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0429 19:35:07.689435   49175 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0429 19:35:07.689445   49175 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0429 19:35:07.689451   49175 command_runner.go:130] > # internal_wipe = true
	I0429 19:35:07.689464   49175 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0429 19:35:07.689472   49175 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0429 19:35:07.689478   49175 command_runner.go:130] > # internal_repair = false
	I0429 19:35:07.689483   49175 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0429 19:35:07.689491   49175 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0429 19:35:07.689503   49175 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0429 19:35:07.689511   49175 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0429 19:35:07.689519   49175 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0429 19:35:07.689526   49175 command_runner.go:130] > [crio.api]
	I0429 19:35:07.689530   49175 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0429 19:35:07.689537   49175 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0429 19:35:07.689542   49175 command_runner.go:130] > # IP address on which the stream server will listen.
	I0429 19:35:07.689549   49175 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0429 19:35:07.689555   49175 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0429 19:35:07.689562   49175 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0429 19:35:07.689566   49175 command_runner.go:130] > # stream_port = "0"
	I0429 19:35:07.689578   49175 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0429 19:35:07.689585   49175 command_runner.go:130] > # stream_enable_tls = false
	I0429 19:35:07.689591   49175 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0429 19:35:07.689598   49175 command_runner.go:130] > # stream_idle_timeout = ""
	I0429 19:35:07.689603   49175 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0429 19:35:07.689611   49175 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0429 19:35:07.689615   49175 command_runner.go:130] > # minutes.
	I0429 19:35:07.689621   49175 command_runner.go:130] > # stream_tls_cert = ""
	I0429 19:35:07.689627   49175 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0429 19:35:07.689641   49175 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0429 19:35:07.689647   49175 command_runner.go:130] > # stream_tls_key = ""
	I0429 19:35:07.689653   49175 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0429 19:35:07.689659   49175 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0429 19:35:07.689676   49175 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0429 19:35:07.689689   49175 command_runner.go:130] > # stream_tls_ca = ""
	I0429 19:35:07.689696   49175 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0429 19:35:07.689700   49175 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0429 19:35:07.689708   49175 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0429 19:35:07.689715   49175 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0429 19:35:07.689722   49175 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0429 19:35:07.689729   49175 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0429 19:35:07.689733   49175 command_runner.go:130] > [crio.runtime]
	I0429 19:35:07.689741   49175 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0429 19:35:07.689749   49175 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0429 19:35:07.689755   49175 command_runner.go:130] > # "nofile=1024:2048"
	I0429 19:35:07.689772   49175 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0429 19:35:07.689778   49175 command_runner.go:130] > # default_ulimits = [
	I0429 19:35:07.689781   49175 command_runner.go:130] > # ]
	I0429 19:35:07.689787   49175 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0429 19:35:07.689793   49175 command_runner.go:130] > # no_pivot = false
	I0429 19:35:07.689801   49175 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0429 19:35:07.689809   49175 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0429 19:35:07.689815   49175 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0429 19:35:07.689826   49175 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0429 19:35:07.689834   49175 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0429 19:35:07.689841   49175 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0429 19:35:07.689848   49175 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0429 19:35:07.689852   49175 command_runner.go:130] > # Cgroup setting for conmon
	I0429 19:35:07.689861   49175 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0429 19:35:07.689867   49175 command_runner.go:130] > conmon_cgroup = "pod"
	I0429 19:35:07.689873   49175 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0429 19:35:07.689880   49175 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0429 19:35:07.689887   49175 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0429 19:35:07.689893   49175 command_runner.go:130] > conmon_env = [
	I0429 19:35:07.689899   49175 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0429 19:35:07.689904   49175 command_runner.go:130] > ]
	I0429 19:35:07.689909   49175 command_runner.go:130] > # Additional environment variables to set for all the
	I0429 19:35:07.689916   49175 command_runner.go:130] > # containers. These are overridden if set in the
	I0429 19:35:07.689921   49175 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0429 19:35:07.689928   49175 command_runner.go:130] > # default_env = [
	I0429 19:35:07.689931   49175 command_runner.go:130] > # ]
	I0429 19:35:07.689939   49175 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0429 19:35:07.689946   49175 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0429 19:35:07.689952   49175 command_runner.go:130] > # selinux = false
	I0429 19:35:07.689958   49175 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0429 19:35:07.689966   49175 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0429 19:35:07.689974   49175 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0429 19:35:07.689980   49175 command_runner.go:130] > # seccomp_profile = ""
	I0429 19:35:07.689985   49175 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0429 19:35:07.689993   49175 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0429 19:35:07.689998   49175 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0429 19:35:07.690010   49175 command_runner.go:130] > # which might increase security.
	I0429 19:35:07.690017   49175 command_runner.go:130] > # This option is currently deprecated,
	I0429 19:35:07.690023   49175 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0429 19:35:07.690030   49175 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0429 19:35:07.690036   49175 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0429 19:35:07.690044   49175 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0429 19:35:07.690054   49175 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0429 19:35:07.690062   49175 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0429 19:35:07.690088   49175 command_runner.go:130] > # This option supports live configuration reload.
	I0429 19:35:07.690099   49175 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0429 19:35:07.690110   49175 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0429 19:35:07.690118   49175 command_runner.go:130] > # the cgroup blockio controller.
	I0429 19:35:07.690122   49175 command_runner.go:130] > # blockio_config_file = ""
	I0429 19:35:07.690131   49175 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0429 19:35:07.690139   49175 command_runner.go:130] > # blockio parameters.
	I0429 19:35:07.690143   49175 command_runner.go:130] > # blockio_reload = false
	I0429 19:35:07.690150   49175 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0429 19:35:07.690157   49175 command_runner.go:130] > # irqbalance daemon.
	I0429 19:35:07.690162   49175 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0429 19:35:07.690170   49175 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0429 19:35:07.690177   49175 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0429 19:35:07.690186   49175 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0429 19:35:07.690195   49175 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0429 19:35:07.690201   49175 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0429 19:35:07.690208   49175 command_runner.go:130] > # This option supports live configuration reload.
	I0429 19:35:07.690212   49175 command_runner.go:130] > # rdt_config_file = ""
	I0429 19:35:07.690220   49175 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0429 19:35:07.690224   49175 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0429 19:35:07.690278   49175 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0429 19:35:07.690289   49175 command_runner.go:130] > # separate_pull_cgroup = ""
	I0429 19:35:07.690295   49175 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0429 19:35:07.690301   49175 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0429 19:35:07.690307   49175 command_runner.go:130] > # will be added.
	I0429 19:35:07.690312   49175 command_runner.go:130] > # default_capabilities = [
	I0429 19:35:07.690318   49175 command_runner.go:130] > # 	"CHOWN",
	I0429 19:35:07.690322   49175 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0429 19:35:07.690331   49175 command_runner.go:130] > # 	"FSETID",
	I0429 19:35:07.690337   49175 command_runner.go:130] > # 	"FOWNER",
	I0429 19:35:07.690341   49175 command_runner.go:130] > # 	"SETGID",
	I0429 19:35:07.690347   49175 command_runner.go:130] > # 	"SETUID",
	I0429 19:35:07.690351   49175 command_runner.go:130] > # 	"SETPCAP",
	I0429 19:35:07.690357   49175 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0429 19:35:07.690360   49175 command_runner.go:130] > # 	"KILL",
	I0429 19:35:07.690366   49175 command_runner.go:130] > # ]
	I0429 19:35:07.690374   49175 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0429 19:35:07.690382   49175 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0429 19:35:07.690390   49175 command_runner.go:130] > # add_inheritable_capabilities = false
	I0429 19:35:07.690398   49175 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0429 19:35:07.690406   49175 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0429 19:35:07.690410   49175 command_runner.go:130] > default_sysctls = [
	I0429 19:35:07.690417   49175 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0429 19:35:07.690421   49175 command_runner.go:130] > ]
	I0429 19:35:07.690427   49175 command_runner.go:130] > # List of devices on the host that a
	I0429 19:35:07.690433   49175 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0429 19:35:07.690440   49175 command_runner.go:130] > # allowed_devices = [
	I0429 19:35:07.690444   49175 command_runner.go:130] > # 	"/dev/fuse",
	I0429 19:35:07.690450   49175 command_runner.go:130] > # ]
	I0429 19:35:07.690458   49175 command_runner.go:130] > # List of additional devices. specified as
	I0429 19:35:07.690467   49175 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0429 19:35:07.690475   49175 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0429 19:35:07.690483   49175 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0429 19:35:07.690489   49175 command_runner.go:130] > # additional_devices = [
	I0429 19:35:07.690492   49175 command_runner.go:130] > # ]
	I0429 19:35:07.690499   49175 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0429 19:35:07.690507   49175 command_runner.go:130] > # cdi_spec_dirs = [
	I0429 19:35:07.690513   49175 command_runner.go:130] > # 	"/etc/cdi",
	I0429 19:35:07.690518   49175 command_runner.go:130] > # 	"/var/run/cdi",
	I0429 19:35:07.690523   49175 command_runner.go:130] > # ]
	I0429 19:35:07.690529   49175 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0429 19:35:07.690537   49175 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0429 19:35:07.690544   49175 command_runner.go:130] > # Defaults to false.
	I0429 19:35:07.690549   49175 command_runner.go:130] > # device_ownership_from_security_context = false
	I0429 19:35:07.690562   49175 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0429 19:35:07.690571   49175 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0429 19:35:07.690582   49175 command_runner.go:130] > # hooks_dir = [
	I0429 19:35:07.690588   49175 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0429 19:35:07.690592   49175 command_runner.go:130] > # ]
	I0429 19:35:07.690597   49175 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0429 19:35:07.690605   49175 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0429 19:35:07.690611   49175 command_runner.go:130] > # its default mounts from the following two files:
	I0429 19:35:07.690617   49175 command_runner.go:130] > #
	I0429 19:35:07.690623   49175 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0429 19:35:07.690641   49175 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0429 19:35:07.690649   49175 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0429 19:35:07.690653   49175 command_runner.go:130] > #
	I0429 19:35:07.690659   49175 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0429 19:35:07.690667   49175 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0429 19:35:07.690675   49175 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0429 19:35:07.690681   49175 command_runner.go:130] > #      only add mounts it finds in this file.
	I0429 19:35:07.690687   49175 command_runner.go:130] > #
	I0429 19:35:07.690691   49175 command_runner.go:130] > # default_mounts_file = ""
	I0429 19:35:07.690698   49175 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0429 19:35:07.690706   49175 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0429 19:35:07.690712   49175 command_runner.go:130] > pids_limit = 1024
	I0429 19:35:07.690718   49175 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0429 19:35:07.690726   49175 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0429 19:35:07.690733   49175 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0429 19:35:07.690743   49175 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0429 19:35:07.690750   49175 command_runner.go:130] > # log_size_max = -1
	I0429 19:35:07.690757   49175 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0429 19:35:07.690763   49175 command_runner.go:130] > # log_to_journald = false
	I0429 19:35:07.690769   49175 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0429 19:35:07.690776   49175 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0429 19:35:07.690781   49175 command_runner.go:130] > # Path to directory for container attach sockets.
	I0429 19:35:07.690787   49175 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0429 19:35:07.690793   49175 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0429 19:35:07.690799   49175 command_runner.go:130] > # bind_mount_prefix = ""
	I0429 19:35:07.690804   49175 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0429 19:35:07.690816   49175 command_runner.go:130] > # read_only = false
	I0429 19:35:07.690825   49175 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0429 19:35:07.690833   49175 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0429 19:35:07.690840   49175 command_runner.go:130] > # live configuration reload.
	I0429 19:35:07.690844   49175 command_runner.go:130] > # log_level = "info"
	I0429 19:35:07.690852   49175 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0429 19:35:07.690857   49175 command_runner.go:130] > # This option supports live configuration reload.
	I0429 19:35:07.690863   49175 command_runner.go:130] > # log_filter = ""
	I0429 19:35:07.690869   49175 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0429 19:35:07.690878   49175 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0429 19:35:07.690884   49175 command_runner.go:130] > # separated by comma.
	I0429 19:35:07.690891   49175 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 19:35:07.690897   49175 command_runner.go:130] > # uid_mappings = ""
	I0429 19:35:07.690903   49175 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0429 19:35:07.690911   49175 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0429 19:35:07.690915   49175 command_runner.go:130] > # separated by comma.
	I0429 19:35:07.690924   49175 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 19:35:07.690933   49175 command_runner.go:130] > # gid_mappings = ""
	I0429 19:35:07.690939   49175 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0429 19:35:07.690948   49175 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0429 19:35:07.690956   49175 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0429 19:35:07.690965   49175 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 19:35:07.690971   49175 command_runner.go:130] > # minimum_mappable_uid = -1
	I0429 19:35:07.690977   49175 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0429 19:35:07.690985   49175 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0429 19:35:07.690993   49175 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0429 19:35:07.691003   49175 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 19:35:07.691009   49175 command_runner.go:130] > # minimum_mappable_gid = -1
	I0429 19:35:07.691015   49175 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0429 19:35:07.691023   49175 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0429 19:35:07.691034   49175 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0429 19:35:07.691040   49175 command_runner.go:130] > # ctr_stop_timeout = 30
	I0429 19:35:07.691045   49175 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0429 19:35:07.691053   49175 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0429 19:35:07.691060   49175 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0429 19:35:07.691065   49175 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0429 19:35:07.691077   49175 command_runner.go:130] > drop_infra_ctr = false
	I0429 19:35:07.691085   49175 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0429 19:35:07.691092   49175 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0429 19:35:07.691101   49175 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0429 19:35:07.691107   49175 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0429 19:35:07.691113   49175 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0429 19:35:07.691121   49175 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0429 19:35:07.691129   49175 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0429 19:35:07.691136   49175 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0429 19:35:07.691142   49175 command_runner.go:130] > # shared_cpuset = ""
	I0429 19:35:07.691148   49175 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0429 19:35:07.691155   49175 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0429 19:35:07.691159   49175 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0429 19:35:07.691169   49175 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0429 19:35:07.691173   49175 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0429 19:35:07.691181   49175 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0429 19:35:07.691189   49175 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0429 19:35:07.691196   49175 command_runner.go:130] > # enable_criu_support = false
	I0429 19:35:07.691201   49175 command_runner.go:130] > # Enable/disable the generation of the container,
	I0429 19:35:07.691209   49175 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0429 19:35:07.691215   49175 command_runner.go:130] > # enable_pod_events = false
	I0429 19:35:07.691221   49175 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0429 19:35:07.691230   49175 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0429 19:35:07.691238   49175 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0429 19:35:07.691243   49175 command_runner.go:130] > # default_runtime = "runc"
	I0429 19:35:07.691250   49175 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0429 19:35:07.691257   49175 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0429 19:35:07.691268   49175 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0429 19:35:07.691275   49175 command_runner.go:130] > # creation as a file is not desired either.
	I0429 19:35:07.691283   49175 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0429 19:35:07.691289   49175 command_runner.go:130] > # the hostname is being managed dynamically.
	I0429 19:35:07.691294   49175 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0429 19:35:07.691299   49175 command_runner.go:130] > # ]
	I0429 19:35:07.691305   49175 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0429 19:35:07.691313   49175 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0429 19:35:07.691322   49175 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0429 19:35:07.691334   49175 command_runner.go:130] > # Each entry in the table should follow the format:
	I0429 19:35:07.691340   49175 command_runner.go:130] > #
	I0429 19:35:07.691345   49175 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0429 19:35:07.691352   49175 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0429 19:35:07.691402   49175 command_runner.go:130] > # runtime_type = "oci"
	I0429 19:35:07.691411   49175 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0429 19:35:07.691415   49175 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0429 19:35:07.691420   49175 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0429 19:35:07.691424   49175 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0429 19:35:07.691428   49175 command_runner.go:130] > # monitor_env = []
	I0429 19:35:07.691433   49175 command_runner.go:130] > # privileged_without_host_devices = false
	I0429 19:35:07.691441   49175 command_runner.go:130] > # allowed_annotations = []
	I0429 19:35:07.691449   49175 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0429 19:35:07.691456   49175 command_runner.go:130] > # Where:
	I0429 19:35:07.691461   49175 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0429 19:35:07.691469   49175 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0429 19:35:07.691477   49175 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0429 19:35:07.691486   49175 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0429 19:35:07.691494   49175 command_runner.go:130] > #   in $PATH.
	I0429 19:35:07.691501   49175 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0429 19:35:07.691508   49175 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0429 19:35:07.691514   49175 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0429 19:35:07.691520   49175 command_runner.go:130] > #   state.
	I0429 19:35:07.691526   49175 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0429 19:35:07.691534   49175 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0429 19:35:07.691539   49175 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0429 19:35:07.691547   49175 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0429 19:35:07.691553   49175 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0429 19:35:07.691561   49175 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0429 19:35:07.691567   49175 command_runner.go:130] > #   The currently recognized values are:
	I0429 19:35:07.691574   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0429 19:35:07.691582   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0429 19:35:07.691590   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0429 19:35:07.691598   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0429 19:35:07.691605   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0429 19:35:07.691614   49175 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0429 19:35:07.691629   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0429 19:35:07.691647   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0429 19:35:07.691652   49175 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0429 19:35:07.691659   49175 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0429 19:35:07.691665   49175 command_runner.go:130] > #   deprecated option "conmon".
	I0429 19:35:07.691672   49175 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0429 19:35:07.691679   49175 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0429 19:35:07.691685   49175 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0429 19:35:07.691693   49175 command_runner.go:130] > #   should be moved to the container's cgroup
	I0429 19:35:07.691701   49175 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0429 19:35:07.691708   49175 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0429 19:35:07.691714   49175 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0429 19:35:07.691722   49175 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0429 19:35:07.691725   49175 command_runner.go:130] > #
	I0429 19:35:07.691729   49175 command_runner.go:130] > # Using the seccomp notifier feature:
	I0429 19:35:07.691737   49175 command_runner.go:130] > #
	I0429 19:35:07.691743   49175 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0429 19:35:07.691751   49175 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0429 19:35:07.691757   49175 command_runner.go:130] > #
	I0429 19:35:07.691763   49175 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0429 19:35:07.691771   49175 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0429 19:35:07.691777   49175 command_runner.go:130] > #
	I0429 19:35:07.691783   49175 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0429 19:35:07.691788   49175 command_runner.go:130] > # feature.
	I0429 19:35:07.691791   49175 command_runner.go:130] > #
	I0429 19:35:07.691799   49175 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0429 19:35:07.691805   49175 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0429 19:35:07.691814   49175 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0429 19:35:07.691822   49175 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0429 19:35:07.691831   49175 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0429 19:35:07.691834   49175 command_runner.go:130] > #
	I0429 19:35:07.691843   49175 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0429 19:35:07.691851   49175 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0429 19:35:07.691855   49175 command_runner.go:130] > #
	I0429 19:35:07.691860   49175 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0429 19:35:07.691868   49175 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0429 19:35:07.691876   49175 command_runner.go:130] > #
	I0429 19:35:07.691884   49175 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0429 19:35:07.691892   49175 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0429 19:35:07.691898   49175 command_runner.go:130] > # limitation.
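
The comment block above covers the seccomp notifier end to end; enabling it on a node like this one could look roughly like the sketch below. The drop-in path, the choice of the runc handler, and the pod-side wiring are assumptions, and depending on how CRI-O merges drop-ins the full runc entry may need to be repeated.

# Hypothetical drop-in allowing the seccomp notifier annotation for the runc handler.
sudo tee /etc/crio/crio.conf.d/20-seccomp-notifier.conf <<'EOF'
[crio.runtime.runtimes.runc]
allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
EOF
sudo systemctl restart crio
# The watched pod then needs restartPolicy: Never and the annotation
# io.kubernetes.cri-o.seccompNotifierAction=stop in its pod spec.
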
	I0429 19:35:07.691903   49175 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0429 19:35:07.691910   49175 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0429 19:35:07.691914   49175 command_runner.go:130] > runtime_type = "oci"
	I0429 19:35:07.691920   49175 command_runner.go:130] > runtime_root = "/run/runc"
	I0429 19:35:07.691924   49175 command_runner.go:130] > runtime_config_path = ""
	I0429 19:35:07.691931   49175 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0429 19:35:07.691939   49175 command_runner.go:130] > monitor_cgroup = "pod"
	I0429 19:35:07.691946   49175 command_runner.go:130] > monitor_exec_cgroup = ""
	I0429 19:35:07.691949   49175 command_runner.go:130] > monitor_env = [
	I0429 19:35:07.691955   49175 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0429 19:35:07.691961   49175 command_runner.go:130] > ]
	I0429 19:35:07.691965   49175 command_runner.go:130] > privileged_without_host_devices = false
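
For comparison with the runc entry above, here is a minimal sketch of registering a second handler in the same table format; the crun path and the drop-in location are assumptions about the node, not something this run configures.

# Hypothetical: add a "crun" handler alongside runc via a drop-in.
sudo tee /etc/crio/crio.conf.d/10-crun.conf <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"
monitor_path = "/usr/libexec/crio/conmon"
EOF
sudo systemctl restart crio
# Pods would select it through a RuntimeClass whose handler field is "crun".
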
	I0429 19:35:07.691974   49175 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0429 19:35:07.691982   49175 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0429 19:35:07.691991   49175 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0429 19:35:07.692002   49175 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0429 19:35:07.692013   49175 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0429 19:35:07.692021   49175 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0429 19:35:07.692032   49175 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0429 19:35:07.692042   49175 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0429 19:35:07.692049   49175 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0429 19:35:07.692058   49175 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0429 19:35:07.692064   49175 command_runner.go:130] > # Example:
	I0429 19:35:07.692069   49175 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0429 19:35:07.692076   49175 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0429 19:35:07.692080   49175 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0429 19:35:07.692087   49175 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0429 19:35:07.692091   49175 command_runner.go:130] > # cpuset = 0
	I0429 19:35:07.692097   49175 command_runner.go:130] > # cpushares = "0-1"
	I0429 19:35:07.692100   49175 command_runner.go:130] > # Where:
	I0429 19:35:07.692107   49175 command_runner.go:130] > # The workload name is workload-type.
	I0429 19:35:07.692114   49175 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0429 19:35:07.692126   49175 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0429 19:35:07.692136   49175 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0429 19:35:07.692146   49175 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0429 19:35:07.692152   49175 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
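
Tying the workload description together, one plausible shape of a drop-in plus the pod-side annotation is sketched below. The name, values, and exact field types are guesses from the comments above (the sample values in the dump appear swapped: cpuset is normally a CPU list string and cpushares a number), not something this run uses.

# Hypothetical workload drop-in (the workloads table is EXPERIMENTAL per above).
sudo tee /etc/crio/crio.conf.d/30-workload.conf <<'EOF'
[crio.runtime.workloads.low-priority]
activation_annotation = "io.crio/low-priority"
annotation_prefix = "io.crio.low-priority"
[crio.runtime.workloads.low-priority.resources]
cpushares = 256
cpuset = "0-1"
EOF
sudo systemctl restart crio
# A pod opts in with the activation annotation as a key-only annotation,
# e.g. metadata.annotations["io.crio/low-priority"] = "".
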
	I0429 19:35:07.692160   49175 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0429 19:35:07.692168   49175 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0429 19:35:07.692175   49175 command_runner.go:130] > # Default value is set to true
	I0429 19:35:07.692180   49175 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0429 19:35:07.692190   49175 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0429 19:35:07.692197   49175 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0429 19:35:07.692202   49175 command_runner.go:130] > # Default value is set to 'false'
	I0429 19:35:07.692207   49175 command_runner.go:130] > # disable_hostport_mapping = false
	I0429 19:35:07.692214   49175 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0429 19:35:07.692219   49175 command_runner.go:130] > #
	I0429 19:35:07.692224   49175 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0429 19:35:07.692230   49175 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0429 19:35:07.692236   49175 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0429 19:35:07.692242   49175 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0429 19:35:07.692249   49175 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0429 19:35:07.692252   49175 command_runner.go:130] > [crio.image]
	I0429 19:35:07.692258   49175 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0429 19:35:07.692262   49175 command_runner.go:130] > # default_transport = "docker://"
	I0429 19:35:07.692268   49175 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0429 19:35:07.692273   49175 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0429 19:35:07.692277   49175 command_runner.go:130] > # global_auth_file = ""
	I0429 19:35:07.692281   49175 command_runner.go:130] > # The image used to instantiate infra containers.
	I0429 19:35:07.692286   49175 command_runner.go:130] > # This option supports live configuration reload.
	I0429 19:35:07.692290   49175 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0429 19:35:07.692295   49175 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0429 19:35:07.692304   49175 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0429 19:35:07.692308   49175 command_runner.go:130] > # This option supports live configuration reload.
	I0429 19:35:07.692312   49175 command_runner.go:130] > # pause_image_auth_file = ""
	I0429 19:35:07.692318   49175 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0429 19:35:07.692340   49175 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0429 19:35:07.692346   49175 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0429 19:35:07.692351   49175 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0429 19:35:07.692359   49175 command_runner.go:130] > # pause_command = "/pause"
	I0429 19:35:07.692365   49175 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0429 19:35:07.692375   49175 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0429 19:35:07.692380   49175 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0429 19:35:07.692388   49175 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0429 19:35:07.692394   49175 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0429 19:35:07.692399   49175 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0429 19:35:07.692402   49175 command_runner.go:130] > # pinned_images = [
	I0429 19:35:07.692406   49175 command_runner.go:130] > # ]
	I0429 19:35:07.692411   49175 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0429 19:35:07.692417   49175 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0429 19:35:07.692423   49175 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0429 19:35:07.692429   49175 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0429 19:35:07.692437   49175 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0429 19:35:07.692440   49175 command_runner.go:130] > # signature_policy = ""
	I0429 19:35:07.692445   49175 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0429 19:35:07.692453   49175 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0429 19:35:07.692460   49175 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0429 19:35:07.692470   49175 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0429 19:35:07.692478   49175 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0429 19:35:07.692485   49175 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0429 19:35:07.692491   49175 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0429 19:35:07.692499   49175 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0429 19:35:07.692505   49175 command_runner.go:130] > # changing them here.
	I0429 19:35:07.692510   49175 command_runner.go:130] > # insecure_registries = [
	I0429 19:35:07.692515   49175 command_runner.go:130] > # ]
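
As the comments above say, registry settings normally belong in /etc/containers/registries.conf rather than in insecure_registries here; a minimal, purely illustrative entry there (the registry host is made up) would be:

# Hypothetical: mark a private registry as insecure system-wide
# (containers-registries.conf(5) v2 format) instead of using insecure_registries.
sudo tee -a /etc/containers/registries.conf <<'EOF'
[[registry]]
location = "registry.example.internal:5000"
insecure = true
EOF
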
	I0429 19:35:07.692522   49175 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0429 19:35:07.692529   49175 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0429 19:35:07.692533   49175 command_runner.go:130] > # image_volumes = "mkdir"
	I0429 19:35:07.692538   49175 command_runner.go:130] > # Temporary directory to use for storing big files
	I0429 19:35:07.692544   49175 command_runner.go:130] > # big_files_temporary_dir = ""
	I0429 19:35:07.692550   49175 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0429 19:35:07.692556   49175 command_runner.go:130] > # CNI plugins.
	I0429 19:35:07.692560   49175 command_runner.go:130] > [crio.network]
	I0429 19:35:07.692569   49175 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0429 19:35:07.692576   49175 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0429 19:35:07.692585   49175 command_runner.go:130] > # cni_default_network = ""
	I0429 19:35:07.692593   49175 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0429 19:35:07.692598   49175 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0429 19:35:07.692606   49175 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0429 19:35:07.692612   49175 command_runner.go:130] > # plugin_dirs = [
	I0429 19:35:07.692615   49175 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0429 19:35:07.692621   49175 command_runner.go:130] > # ]
	I0429 19:35:07.692626   49175 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0429 19:35:07.692631   49175 command_runner.go:130] > [crio.metrics]
	I0429 19:35:07.692642   49175 command_runner.go:130] > # Globally enable or disable metrics support.
	I0429 19:35:07.692646   49175 command_runner.go:130] > enable_metrics = true
	I0429 19:35:07.692650   49175 command_runner.go:130] > # Specify enabled metrics collectors.
	I0429 19:35:07.692657   49175 command_runner.go:130] > # Per default all metrics are enabled.
	I0429 19:35:07.692663   49175 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0429 19:35:07.692671   49175 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0429 19:35:07.692679   49175 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0429 19:35:07.692683   49175 command_runner.go:130] > # metrics_collectors = [
	I0429 19:35:07.692689   49175 command_runner.go:130] > # 	"operations",
	I0429 19:35:07.692694   49175 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0429 19:35:07.692701   49175 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0429 19:35:07.692705   49175 command_runner.go:130] > # 	"operations_errors",
	I0429 19:35:07.692711   49175 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0429 19:35:07.692715   49175 command_runner.go:130] > # 	"image_pulls_by_name",
	I0429 19:35:07.692719   49175 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0429 19:35:07.692728   49175 command_runner.go:130] > # 	"image_pulls_failures",
	I0429 19:35:07.692735   49175 command_runner.go:130] > # 	"image_pulls_successes",
	I0429 19:35:07.692739   49175 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0429 19:35:07.692746   49175 command_runner.go:130] > # 	"image_layer_reuse",
	I0429 19:35:07.692750   49175 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0429 19:35:07.692756   49175 command_runner.go:130] > # 	"containers_oom_total",
	I0429 19:35:07.692760   49175 command_runner.go:130] > # 	"containers_oom",
	I0429 19:35:07.692766   49175 command_runner.go:130] > # 	"processes_defunct",
	I0429 19:35:07.692770   49175 command_runner.go:130] > # 	"operations_total",
	I0429 19:35:07.692776   49175 command_runner.go:130] > # 	"operations_latency_seconds",
	I0429 19:35:07.692781   49175 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0429 19:35:07.692787   49175 command_runner.go:130] > # 	"operations_errors_total",
	I0429 19:35:07.692796   49175 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0429 19:35:07.692804   49175 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0429 19:35:07.692808   49175 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0429 19:35:07.692814   49175 command_runner.go:130] > # 	"image_pulls_success_total",
	I0429 19:35:07.692819   49175 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0429 19:35:07.692823   49175 command_runner.go:130] > # 	"containers_oom_count_total",
	I0429 19:35:07.692828   49175 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0429 19:35:07.692835   49175 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0429 19:35:07.692838   49175 command_runner.go:130] > # ]
	I0429 19:35:07.692846   49175 command_runner.go:130] > # The port on which the metrics server will listen.
	I0429 19:35:07.692850   49175 command_runner.go:130] > # metrics_port = 9090
	I0429 19:35:07.692858   49175 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0429 19:35:07.692861   49175 command_runner.go:130] > # metrics_socket = ""
	I0429 19:35:07.692869   49175 command_runner.go:130] > # The certificate for the secure metrics server.
	I0429 19:35:07.692875   49175 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0429 19:35:07.692883   49175 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0429 19:35:07.692890   49175 command_runner.go:130] > # certificate on any modification event.
	I0429 19:35:07.692894   49175 command_runner.go:130] > # metrics_cert = ""
	I0429 19:35:07.692902   49175 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0429 19:35:07.692907   49175 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0429 19:35:07.692913   49175 command_runner.go:130] > # metrics_key = ""
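
Since enable_metrics is true in this config and metrics_port is left at its commented default of 9090, the endpoint should be reachable on the node itself; a quick illustrative probe (not part of the test) would be:

# Illustrative: peek at CRI-O's Prometheus metrics, assuming the default port 9090.
curl -s http://127.0.0.1:9090/metrics | head -n 20
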
	I0429 19:35:07.692918   49175 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0429 19:35:07.692924   49175 command_runner.go:130] > [crio.tracing]
	I0429 19:35:07.692930   49175 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0429 19:35:07.692937   49175 command_runner.go:130] > # enable_tracing = false
	I0429 19:35:07.692942   49175 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0429 19:35:07.692949   49175 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0429 19:35:07.692956   49175 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0429 19:35:07.692963   49175 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0429 19:35:07.692967   49175 command_runner.go:130] > # CRI-O NRI configuration.
	I0429 19:35:07.692973   49175 command_runner.go:130] > [crio.nri]
	I0429 19:35:07.692978   49175 command_runner.go:130] > # Globally enable or disable NRI.
	I0429 19:35:07.692983   49175 command_runner.go:130] > # enable_nri = false
	I0429 19:35:07.692990   49175 command_runner.go:130] > # NRI socket to listen on.
	I0429 19:35:07.692997   49175 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0429 19:35:07.693001   49175 command_runner.go:130] > # NRI plugin directory to use.
	I0429 19:35:07.693013   49175 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0429 19:35:07.693020   49175 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0429 19:35:07.693025   49175 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0429 19:35:07.693033   49175 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0429 19:35:07.693040   49175 command_runner.go:130] > # nri_disable_connections = false
	I0429 19:35:07.693045   49175 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0429 19:35:07.693052   49175 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0429 19:35:07.693057   49175 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0429 19:35:07.693063   49175 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0429 19:35:07.693069   49175 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0429 19:35:07.693074   49175 command_runner.go:130] > [crio.stats]
	I0429 19:35:07.693084   49175 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0429 19:35:07.693092   49175 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0429 19:35:07.693096   49175 command_runner.go:130] > # stats_collection_period = 0
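
The dump above is the configuration minikube read back from the node; to re-check the effective settings by hand one could run something like the following inside the VM (illustrative only, and the output format varies by CRI-O version):

# Illustrative: inspect the runtime's view of its configuration on the node.
sudo crictl info | head -n 40
sudo crio config 2>/dev/null | grep -E 'default_runtime|cgroup_manager|pause_image'
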
	I0429 19:35:07.693288   49175 cni.go:84] Creating CNI manager for ""
	I0429 19:35:07.693305   49175 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 19:35:07.693336   49175 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 19:35:07.693362   49175 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.127 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-773806 NodeName:multinode-773806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.127"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 19:35:07.693494   49175 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.127
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-773806"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.127
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.127"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
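
This generated YAML is what gets copied to /var/tmp/minikube/kubeadm.yaml.new just below; as a sketch only, and assuming this kubeadm build ships the "config validate" subcommand (recent releases do), it could be checked by hand with:

# Hypothetical manual sanity check of the generated kubeadm config on the node.
sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate \
  --config /var/tmp/minikube/kubeadm.yaml.new
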
	
	I0429 19:35:07.693562   49175 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:35:07.706246   49175 command_runner.go:130] > kubeadm
	I0429 19:35:07.706269   49175 command_runner.go:130] > kubectl
	I0429 19:35:07.706274   49175 command_runner.go:130] > kubelet
	I0429 19:35:07.706294   49175 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 19:35:07.706338   49175 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 19:35:07.718079   49175 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 19:35:07.737985   49175 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:35:07.770879   49175 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0429 19:35:07.807852   49175 ssh_runner.go:195] Run: grep 192.168.39.127	control-plane.minikube.internal$ /etc/hosts
	I0429 19:35:07.813017   49175 command_runner.go:130] > 192.168.39.127	control-plane.minikube.internal
	I0429 19:35:07.813096   49175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:35:07.961630   49175 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:35:07.979508   49175 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806 for IP: 192.168.39.127
	I0429 19:35:07.979531   49175 certs.go:194] generating shared ca certs ...
	I0429 19:35:07.979551   49175 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:35:07.979707   49175 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 19:35:07.979774   49175 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 19:35:07.979789   49175 certs.go:256] generating profile certs ...
	I0429 19:35:07.979890   49175 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/client.key
	I0429 19:35:07.979977   49175 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/apiserver.key.a5d6a352
	I0429 19:35:07.980030   49175 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/proxy-client.key
	I0429 19:35:07.980043   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:35:07.980064   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:35:07.980081   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:35:07.980097   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:35:07.980115   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:35:07.980133   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:35:07.980153   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:35:07.980169   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:35:07.980228   49175 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 19:35:07.980294   49175 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 19:35:07.980308   49175 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 19:35:07.980339   49175 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 19:35:07.980385   49175 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 19:35:07.980415   49175 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 19:35:07.980467   49175 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:35:07.980509   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:35:07.980527   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem -> /usr/share/ca-certificates/15124.pem
	I0429 19:35:07.980541   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /usr/share/ca-certificates/151242.pem
	I0429 19:35:07.981294   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:35:08.009976   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 19:35:08.038420   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:35:08.067069   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:35:08.095154   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 19:35:08.120666   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 19:35:08.150089   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:35:08.178991   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 19:35:08.208626   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:35:08.236974   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 19:35:08.264473   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 19:35:08.292510   49175 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 19:35:08.311420   49175 ssh_runner.go:195] Run: openssl version
	I0429 19:35:08.317914   49175 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 19:35:08.317992   49175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:35:08.329630   49175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:35:08.334743   49175 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:35:08.334767   49175 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:35:08.334818   49175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:35:08.340731   49175 command_runner.go:130] > b5213941
	I0429 19:35:08.340905   49175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:35:08.350777   49175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 19:35:08.362197   49175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 19:35:08.367049   49175 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:35:08.367068   49175 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:35:08.367099   49175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 19:35:08.373161   49175 command_runner.go:130] > 51391683
	I0429 19:35:08.373206   49175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 19:35:08.383385   49175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 19:35:08.396637   49175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 19:35:08.401890   49175 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:35:08.402115   49175 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:35:08.402161   49175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 19:35:08.408789   49175 command_runner.go:130] > 3ec20f2e
	I0429 19:35:08.408956   49175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
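
The three certificate blocks above repeat one pattern: hash the PEM with openssl and point /etc/ssl/certs/<hash>.0 at it so OpenSSL's default verification paths pick it up. Condensed into a single illustrative snippet using the first cert from this run:

# Illustrative condensation of the hash-and-symlink steps performed above.
cert=/usr/share/ca-certificates/minikubeCA.pem
h=$(openssl x509 -hash -noout -in "$cert")
sudo ln -fs "$cert" "/etc/ssl/certs/${h}.0"
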
	I0429 19:35:08.420245   49175 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:35:08.425242   49175 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:35:08.425272   49175 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0429 19:35:08.425281   49175 command_runner.go:130] > Device: 253,1	Inode: 9433622     Links: 1
	I0429 19:35:08.425290   49175 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 19:35:08.425299   49175 command_runner.go:130] > Access: 2024-04-29 19:28:47.186812513 +0000
	I0429 19:35:08.425306   49175 command_runner.go:130] > Modify: 2024-04-29 19:28:47.186812513 +0000
	I0429 19:35:08.425314   49175 command_runner.go:130] > Change: 2024-04-29 19:28:47.186812513 +0000
	I0429 19:35:08.425322   49175 command_runner.go:130] >  Birth: 2024-04-29 19:28:47.186812513 +0000
	I0429 19:35:08.425440   49175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 19:35:08.432283   49175 command_runner.go:130] > Certificate will not expire
	I0429 19:35:08.432361   49175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 19:35:08.438794   49175 command_runner.go:130] > Certificate will not expire
	I0429 19:35:08.438855   49175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 19:35:08.444991   49175 command_runner.go:130] > Certificate will not expire
	I0429 19:35:08.445052   49175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 19:35:08.451116   49175 command_runner.go:130] > Certificate will not expire
	I0429 19:35:08.451165   49175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 19:35:08.457133   49175 command_runner.go:130] > Certificate will not expire
	I0429 19:35:08.457197   49175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 19:35:08.463347   49175 command_runner.go:130] > Certificate will not expire
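
Each "-checkend 86400" probe above simply asks whether the certificate survives the next 24 hours; when auditing a node by hand the same checks can be rolled into one loop (a sketch, run as root since the cert directory is root-owned):

# Illustrative: flag any minikube-managed certificate expiring within 24 hours.
sudo bash -c 'for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
  openssl x509 -noout -in "$c" -checkend 86400 >/dev/null || echo "expiring soon: $c"
done'
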
	I0429 19:35:08.463418   49175 kubeadm.go:391] StartCluster: {Name:multinode-773806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-773806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:35:08.463553   49175 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 19:35:08.463624   49175 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 19:35:08.508113   49175 command_runner.go:130] > e1a626f59ab5873b1c7e06e8347139a4f3f9851df447bfeab7fb730a33cb663e
	I0429 19:35:08.508139   49175 command_runner.go:130] > 46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18
	I0429 19:35:08.508145   49175 command_runner.go:130] > 19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01
	I0429 19:35:08.508152   49175 command_runner.go:130] > 305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35
	I0429 19:35:08.508157   49175 command_runner.go:130] > e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7
	I0429 19:35:08.508163   49175 command_runner.go:130] > 6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e
	I0429 19:35:08.508172   49175 command_runner.go:130] > 28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe
	I0429 19:35:08.508184   49175 command_runner.go:130] > bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d
	I0429 19:35:08.508206   49175 cri.go:89] found id: "e1a626f59ab5873b1c7e06e8347139a4f3f9851df447bfeab7fb730a33cb663e"
	I0429 19:35:08.508221   49175 cri.go:89] found id: "46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18"
	I0429 19:35:08.508225   49175 cri.go:89] found id: "19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01"
	I0429 19:35:08.508227   49175 cri.go:89] found id: "305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35"
	I0429 19:35:08.508230   49175 cri.go:89] found id: "e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7"
	I0429 19:35:08.508233   49175 cri.go:89] found id: "6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e"
	I0429 19:35:08.508236   49175 cri.go:89] found id: "28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe"
	I0429 19:35:08.508238   49175 cri.go:89] found id: "bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d"
	I0429 19:35:08.508240   49175 cri.go:89] found id: ""
	I0429 19:35:08.508289   49175 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.599228060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714419399599149590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0269571b-886f-4b32-9209-0e90b3b1804d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.599814080Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6c917e2-4dd2-43d4-9068-092e392b2286 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.599897381Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6c917e2-4dd2-43d4-9068-092e392b2286 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.600296433Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2fb326dbcef57d3bbe95233b16e022fd5fd3bae33ebe5c87a0f51055bc8ba80,PodSandboxId:a1a2e94cb6ac094ec3b9afe7a6c834b99be78ab0c64491ac723c2f3348dbf2ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714419349345422670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44b8ef5992602486837e2ea2c56864636442ed442c246e5a5b9bb93be932e23,PodSandboxId:75563ac3377fd24238989285dcc59268e3e68a7f3ac2bf979f9aa274e632cb71,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714419315781447172,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9942452293a11f80f22b277a2fcee01abf0e38a51bb3f6b45ddf1dc524b557c,PodSandboxId:db8694fe181b12d57d9f8ad1388d2877a27870b9d79d25be37cb341800d19d64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714419315841918559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23b20c1a1888c715e25c28dfd27a4f61f8d433f9e836b9c39c6ca7f3ca0e7e8,PodSandboxId:e08d32d1c554ab6ee30b17103ecab11ce8b4285dfb14df434c78f7cf90ab90af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714419315681113024,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13-512c5d336ba7,},Annotations:map[string]
string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e6a58f579243e6cb3e6f6861dd1bf66e9ee1f4ded82d6a10d8f7cd75afd355,PodSandboxId:4827f71827df8e22f2250ea6970f6a61ce0670ad91924c0f52353449cfb3e929,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714419315583586336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd171b7365ef28c752b6dbfa8eeb2824617f2c787b80af5ed48d968ff20b759d,PodSandboxId:8174c871a80838577b4f378024621f1af603736df3ca9b693241b14941cce240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714419310832413911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string{io.kubernetes.container.hash: a99f5bf3,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158117fc5586ddd5f255b607d0890364bb2620e5f780e3a30ca08d378dd8fe43,PodSandboxId:0e348a729fef589e316cd04ed9245bbd2519fb2105fbcfd5ed2b2313bcbaeb26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714419310758349616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c33
51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f524cad554a80a5d6a27ba6563ea8c8f621a795a1c50623338c8fe8a4115da,PodSandboxId:9fdd8a3bf7b4dff2043f01be84ceb0a9d0ade12d113d067ba3dbfba615de478b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714419310804105150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:312d2cc38cb7921577370967c3e1f1355c1f3e19a6e1ebea1e5999e69c8051c0,PodSandboxId:5808bb5d0b52c2b6dcd28fa3fa0dc470cbb95cd8b346386727d82a0301a6cf36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714419310709972037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash: aa7fe539,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc0ee6bf1c03cbcbd4ea4e5e6c9c2987263bd71212a7b23368d9db518e3ee6c,PodSandboxId:17c1759c31d692f9a1470aaeddd37ee4d782a38b9a37d65fe7d268921c5f9769,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714419004298773183,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a626f59ab5873b1c7e06e8347139a4f3f9851df447bfeab7fb730a33cb663e,PodSandboxId:49b427cb0ae262db48c72ae12d892b4ce23714e79d39be3d0f35b13099ea33c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714418953469366757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.kubernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18,PodSandboxId:c358abeb705fe27b6a791b10ec94d1e5828461489d28558b394000231adb4b11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714418952426483579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01,PodSandboxId:7351f900961919b09ee26ab9d5462cb8c1299c10ed067fc93a0598d12586b2b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714418951015992455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35,PodSandboxId:8df979e0df5a6155c590f8fc519306e7a0e281480e2c8436ede54e4efe5bb98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714418950702509128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13
-512c5d336ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e,PodSandboxId:5459600487f294a104c1c7cb36f5789086d522e13fb1ac3a8f05a968d807cef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714418930908278427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string
{io.kubernetes.container.hash: a99f5bf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7,PodSandboxId:54315db19ed4f14de6fecfa2d7ad4da6365acd618a5e499021386541c4ffc12f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714418930914531932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.
container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe,PodSandboxId:423ec7fceda9b25192a04cb7f9665345a665bc725ed13d676cbd75238fdd5c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714418930824968380,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash:
aa7fe539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d,PodSandboxId:ca30f74c7f5dd7894b5c7a3709754dc478c207446f3e2aeade363d17f1f4f653,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714418930818106797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6c917e2-4dd2-43d4-9068-092e392b2286 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.648849334Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=823c5974-36d6-49a0-a502-4e50c0a8941d name=/runtime.v1.RuntimeService/Version
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.649039648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=823c5974-36d6-49a0-a502-4e50c0a8941d name=/runtime.v1.RuntimeService/Version
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.655831369Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b2305f7-c869-46b4-9ba3-8e346b84e629 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.656642492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714419399656617210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b2305f7-c869-46b4-9ba3-8e346b84e629 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.657381789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d00fc01-ff49-4fb0-8c07-f5ee6a53650e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.657465224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d00fc01-ff49-4fb0-8c07-f5ee6a53650e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.658241116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2fb326dbcef57d3bbe95233b16e022fd5fd3bae33ebe5c87a0f51055bc8ba80,PodSandboxId:a1a2e94cb6ac094ec3b9afe7a6c834b99be78ab0c64491ac723c2f3348dbf2ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714419349345422670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44b8ef5992602486837e2ea2c56864636442ed442c246e5a5b9bb93be932e23,PodSandboxId:75563ac3377fd24238989285dcc59268e3e68a7f3ac2bf979f9aa274e632cb71,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714419315781447172,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9942452293a11f80f22b277a2fcee01abf0e38a51bb3f6b45ddf1dc524b557c,PodSandboxId:db8694fe181b12d57d9f8ad1388d2877a27870b9d79d25be37cb341800d19d64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714419315841918559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23b20c1a1888c715e25c28dfd27a4f61f8d433f9e836b9c39c6ca7f3ca0e7e8,PodSandboxId:e08d32d1c554ab6ee30b17103ecab11ce8b4285dfb14df434c78f7cf90ab90af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714419315681113024,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13-512c5d336ba7,},Annotations:map[string]
string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e6a58f579243e6cb3e6f6861dd1bf66e9ee1f4ded82d6a10d8f7cd75afd355,PodSandboxId:4827f71827df8e22f2250ea6970f6a61ce0670ad91924c0f52353449cfb3e929,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714419315583586336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd171b7365ef28c752b6dbfa8eeb2824617f2c787b80af5ed48d968ff20b759d,PodSandboxId:8174c871a80838577b4f378024621f1af603736df3ca9b693241b14941cce240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714419310832413911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string{io.kubernetes.container.hash: a99f5bf3,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158117fc5586ddd5f255b607d0890364bb2620e5f780e3a30ca08d378dd8fe43,PodSandboxId:0e348a729fef589e316cd04ed9245bbd2519fb2105fbcfd5ed2b2313bcbaeb26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714419310758349616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c33
51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f524cad554a80a5d6a27ba6563ea8c8f621a795a1c50623338c8fe8a4115da,PodSandboxId:9fdd8a3bf7b4dff2043f01be84ceb0a9d0ade12d113d067ba3dbfba615de478b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714419310804105150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:312d2cc38cb7921577370967c3e1f1355c1f3e19a6e1ebea1e5999e69c8051c0,PodSandboxId:5808bb5d0b52c2b6dcd28fa3fa0dc470cbb95cd8b346386727d82a0301a6cf36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714419310709972037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash: aa7fe539,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc0ee6bf1c03cbcbd4ea4e5e6c9c2987263bd71212a7b23368d9db518e3ee6c,PodSandboxId:17c1759c31d692f9a1470aaeddd37ee4d782a38b9a37d65fe7d268921c5f9769,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714419004298773183,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a626f59ab5873b1c7e06e8347139a4f3f9851df447bfeab7fb730a33cb663e,PodSandboxId:49b427cb0ae262db48c72ae12d892b4ce23714e79d39be3d0f35b13099ea33c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714418953469366757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.kubernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18,PodSandboxId:c358abeb705fe27b6a791b10ec94d1e5828461489d28558b394000231adb4b11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714418952426483579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01,PodSandboxId:7351f900961919b09ee26ab9d5462cb8c1299c10ed067fc93a0598d12586b2b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714418951015992455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35,PodSandboxId:8df979e0df5a6155c590f8fc519306e7a0e281480e2c8436ede54e4efe5bb98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714418950702509128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13
-512c5d336ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e,PodSandboxId:5459600487f294a104c1c7cb36f5789086d522e13fb1ac3a8f05a968d807cef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714418930908278427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string
{io.kubernetes.container.hash: a99f5bf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7,PodSandboxId:54315db19ed4f14de6fecfa2d7ad4da6365acd618a5e499021386541c4ffc12f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714418930914531932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.
container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe,PodSandboxId:423ec7fceda9b25192a04cb7f9665345a665bc725ed13d676cbd75238fdd5c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714418930824968380,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash:
aa7fe539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d,PodSandboxId:ca30f74c7f5dd7894b5c7a3709754dc478c207446f3e2aeade363d17f1f4f653,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714418930818106797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d00fc01-ff49-4fb0-8c07-f5ee6a53650e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.708120814Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=362cf37b-2fef-4671-ba40-854539a2cb8d name=/runtime.v1.RuntimeService/Version
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.708293576Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=362cf37b-2fef-4671-ba40-854539a2cb8d name=/runtime.v1.RuntimeService/Version
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.709625887Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fff91a78-7a08-4f16-9182-3c127892ab31 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.710018027Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714419399709995458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fff91a78-7a08-4f16-9182-3c127892ab31 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.710982932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efaea8f4-ddd1-4222-b68e-582e282fe7e1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.711073839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efaea8f4-ddd1-4222-b68e-582e282fe7e1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.711517805Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2fb326dbcef57d3bbe95233b16e022fd5fd3bae33ebe5c87a0f51055bc8ba80,PodSandboxId:a1a2e94cb6ac094ec3b9afe7a6c834b99be78ab0c64491ac723c2f3348dbf2ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714419349345422670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44b8ef5992602486837e2ea2c56864636442ed442c246e5a5b9bb93be932e23,PodSandboxId:75563ac3377fd24238989285dcc59268e3e68a7f3ac2bf979f9aa274e632cb71,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714419315781447172,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9942452293a11f80f22b277a2fcee01abf0e38a51bb3f6b45ddf1dc524b557c,PodSandboxId:db8694fe181b12d57d9f8ad1388d2877a27870b9d79d25be37cb341800d19d64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714419315841918559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23b20c1a1888c715e25c28dfd27a4f61f8d433f9e836b9c39c6ca7f3ca0e7e8,PodSandboxId:e08d32d1c554ab6ee30b17103ecab11ce8b4285dfb14df434c78f7cf90ab90af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714419315681113024,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13-512c5d336ba7,},Annotations:map[string]
string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e6a58f579243e6cb3e6f6861dd1bf66e9ee1f4ded82d6a10d8f7cd75afd355,PodSandboxId:4827f71827df8e22f2250ea6970f6a61ce0670ad91924c0f52353449cfb3e929,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714419315583586336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd171b7365ef28c752b6dbfa8eeb2824617f2c787b80af5ed48d968ff20b759d,PodSandboxId:8174c871a80838577b4f378024621f1af603736df3ca9b693241b14941cce240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714419310832413911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string{io.kubernetes.container.hash: a99f5bf3,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158117fc5586ddd5f255b607d0890364bb2620e5f780e3a30ca08d378dd8fe43,PodSandboxId:0e348a729fef589e316cd04ed9245bbd2519fb2105fbcfd5ed2b2313bcbaeb26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714419310758349616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c33
51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f524cad554a80a5d6a27ba6563ea8c8f621a795a1c50623338c8fe8a4115da,PodSandboxId:9fdd8a3bf7b4dff2043f01be84ceb0a9d0ade12d113d067ba3dbfba615de478b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714419310804105150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:312d2cc38cb7921577370967c3e1f1355c1f3e19a6e1ebea1e5999e69c8051c0,PodSandboxId:5808bb5d0b52c2b6dcd28fa3fa0dc470cbb95cd8b346386727d82a0301a6cf36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714419310709972037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash: aa7fe539,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc0ee6bf1c03cbcbd4ea4e5e6c9c2987263bd71212a7b23368d9db518e3ee6c,PodSandboxId:17c1759c31d692f9a1470aaeddd37ee4d782a38b9a37d65fe7d268921c5f9769,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714419004298773183,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a626f59ab5873b1c7e06e8347139a4f3f9851df447bfeab7fb730a33cb663e,PodSandboxId:49b427cb0ae262db48c72ae12d892b4ce23714e79d39be3d0f35b13099ea33c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714418953469366757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.kubernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18,PodSandboxId:c358abeb705fe27b6a791b10ec94d1e5828461489d28558b394000231adb4b11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714418952426483579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01,PodSandboxId:7351f900961919b09ee26ab9d5462cb8c1299c10ed067fc93a0598d12586b2b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714418951015992455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35,PodSandboxId:8df979e0df5a6155c590f8fc519306e7a0e281480e2c8436ede54e4efe5bb98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714418950702509128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13
-512c5d336ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e,PodSandboxId:5459600487f294a104c1c7cb36f5789086d522e13fb1ac3a8f05a968d807cef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714418930908278427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string
{io.kubernetes.container.hash: a99f5bf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7,PodSandboxId:54315db19ed4f14de6fecfa2d7ad4da6365acd618a5e499021386541c4ffc12f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714418930914531932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.
container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe,PodSandboxId:423ec7fceda9b25192a04cb7f9665345a665bc725ed13d676cbd75238fdd5c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714418930824968380,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash:
aa7fe539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d,PodSandboxId:ca30f74c7f5dd7894b5c7a3709754dc478c207446f3e2aeade363d17f1f4f653,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714418930818106797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efaea8f4-ddd1-4222-b68e-582e282fe7e1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.762085916Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7148be8-569d-4921-a85a-c5916e737cff name=/runtime.v1.RuntimeService/Version
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.762275654Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7148be8-569d-4921-a85a-c5916e737cff name=/runtime.v1.RuntimeService/Version
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.763853330Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d055ae7c-ed3c-4a8f-a266-e8be1039f3f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.764551453Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714419399764524958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d055ae7c-ed3c-4a8f-a266-e8be1039f3f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.765509262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70d90618-ce9f-4aab-9830-b6a8fe84dd1b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.765566954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70d90618-ce9f-4aab-9830-b6a8fe84dd1b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:36:39 multinode-773806 crio[2847]: time="2024-04-29 19:36:39.765923517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2fb326dbcef57d3bbe95233b16e022fd5fd3bae33ebe5c87a0f51055bc8ba80,PodSandboxId:a1a2e94cb6ac094ec3b9afe7a6c834b99be78ab0c64491ac723c2f3348dbf2ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714419349345422670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44b8ef5992602486837e2ea2c56864636442ed442c246e5a5b9bb93be932e23,PodSandboxId:75563ac3377fd24238989285dcc59268e3e68a7f3ac2bf979f9aa274e632cb71,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714419315781447172,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9942452293a11f80f22b277a2fcee01abf0e38a51bb3f6b45ddf1dc524b557c,PodSandboxId:db8694fe181b12d57d9f8ad1388d2877a27870b9d79d25be37cb341800d19d64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714419315841918559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23b20c1a1888c715e25c28dfd27a4f61f8d433f9e836b9c39c6ca7f3ca0e7e8,PodSandboxId:e08d32d1c554ab6ee30b17103ecab11ce8b4285dfb14df434c78f7cf90ab90af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714419315681113024,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13-512c5d336ba7,},Annotations:map[string]
string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e6a58f579243e6cb3e6f6861dd1bf66e9ee1f4ded82d6a10d8f7cd75afd355,PodSandboxId:4827f71827df8e22f2250ea6970f6a61ce0670ad91924c0f52353449cfb3e929,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714419315583586336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd171b7365ef28c752b6dbfa8eeb2824617f2c787b80af5ed48d968ff20b759d,PodSandboxId:8174c871a80838577b4f378024621f1af603736df3ca9b693241b14941cce240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714419310832413911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string{io.kubernetes.container.hash: a99f5bf3,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158117fc5586ddd5f255b607d0890364bb2620e5f780e3a30ca08d378dd8fe43,PodSandboxId:0e348a729fef589e316cd04ed9245bbd2519fb2105fbcfd5ed2b2313bcbaeb26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714419310758349616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c33
51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f524cad554a80a5d6a27ba6563ea8c8f621a795a1c50623338c8fe8a4115da,PodSandboxId:9fdd8a3bf7b4dff2043f01be84ceb0a9d0ade12d113d067ba3dbfba615de478b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714419310804105150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:312d2cc38cb7921577370967c3e1f1355c1f3e19a6e1ebea1e5999e69c8051c0,PodSandboxId:5808bb5d0b52c2b6dcd28fa3fa0dc470cbb95cd8b346386727d82a0301a6cf36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714419310709972037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash: aa7fe539,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc0ee6bf1c03cbcbd4ea4e5e6c9c2987263bd71212a7b23368d9db518e3ee6c,PodSandboxId:17c1759c31d692f9a1470aaeddd37ee4d782a38b9a37d65fe7d268921c5f9769,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714419004298773183,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a626f59ab5873b1c7e06e8347139a4f3f9851df447bfeab7fb730a33cb663e,PodSandboxId:49b427cb0ae262db48c72ae12d892b4ce23714e79d39be3d0f35b13099ea33c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714418953469366757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.kubernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18,PodSandboxId:c358abeb705fe27b6a791b10ec94d1e5828461489d28558b394000231adb4b11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714418952426483579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01,PodSandboxId:7351f900961919b09ee26ab9d5462cb8c1299c10ed067fc93a0598d12586b2b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714418951015992455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35,PodSandboxId:8df979e0df5a6155c590f8fc519306e7a0e281480e2c8436ede54e4efe5bb98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714418950702509128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13
-512c5d336ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e,PodSandboxId:5459600487f294a104c1c7cb36f5789086d522e13fb1ac3a8f05a968d807cef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714418930908278427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string
{io.kubernetes.container.hash: a99f5bf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7,PodSandboxId:54315db19ed4f14de6fecfa2d7ad4da6365acd618a5e499021386541c4ffc12f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714418930914531932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.
container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe,PodSandboxId:423ec7fceda9b25192a04cb7f9665345a665bc725ed13d676cbd75238fdd5c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714418930824968380,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash:
aa7fe539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d,PodSandboxId:ca30f74c7f5dd7894b5c7a3709754dc478c207446f3e2aeade363d17f1f4f653,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714418930818106797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70d90618-ce9f-4aab-9830-b6a8fe84dd1b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a2fb326dbcef5       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      50 seconds ago       Running             busybox                   1                   a1a2e94cb6ac0       busybox-fc5497c4f-b9pvl
	d9942452293a1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   db8694fe181b1       coredns-7db6d8ff4d-vdv7z
	f44b8ef599260       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   75563ac3377fd       kindnet-vdl58
	a23b20c1a1888       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      About a minute ago   Running             kube-proxy                1                   e08d32d1c554a       kube-proxy-vfsvr
	33e6a58f57924       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   4827f71827df8       storage-provisioner
	dd171b7365ef2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   8174c871a8083       etcd-multinode-773806
	27f524cad554a       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      About a minute ago   Running             kube-scheduler            1                   9fdd8a3bf7b4d       kube-scheduler-multinode-773806
	158117fc5586d       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   1                   0e348a729fef5       kube-controller-manager-multinode-773806
	312d2cc38cb79       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            1                   5808bb5d0b52c       kube-apiserver-multinode-773806
	6bc0ee6bf1c03       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   17c1759c31d69       busybox-fc5497c4f-b9pvl
	e1a626f59ab58       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   49b427cb0ae26       storage-provisioner
	46ad3d852252a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   c358abeb705fe       coredns-7db6d8ff4d-vdv7z
	19c5032fd428a       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   7351f90096191       kindnet-vdl58
	305781b9713c9       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago        Exited              kube-proxy                0                   8df979e0df5a6       kube-proxy-vfsvr
	e81cb921a76b2       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago        Exited              kube-scheduler            0                   54315db19ed4f       kube-scheduler-multinode-773806
	6fb17aa0e298d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   5459600487f29       etcd-multinode-773806
	28805d1b207fa       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago        Exited              kube-apiserver            0                   423ec7fceda9b       kube-apiserver-multinode-773806
	bbd23693658e9       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago        Exited              kube-controller-manager   0                   ca30f74c7f5dd       kube-controller-manager-multinode-773806
	
	
	==> coredns [46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18] <==
	[INFO] 10.244.1.2:59402 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002177369s
	[INFO] 10.244.1.2:53557 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165989s
	[INFO] 10.244.1.2:48817 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096944s
	[INFO] 10.244.1.2:46437 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001598607s
	[INFO] 10.244.1.2:37562 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170251s
	[INFO] 10.244.1.2:49910 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104308s
	[INFO] 10.244.1.2:56068 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00019488s
	[INFO] 10.244.0.3:33773 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001788s
	[INFO] 10.244.0.3:50988 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015534s
	[INFO] 10.244.0.3:32923 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086513s
	[INFO] 10.244.0.3:35251 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121138s
	[INFO] 10.244.1.2:41674 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142798s
	[INFO] 10.244.1.2:52916 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177584s
	[INFO] 10.244.1.2:37672 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170818s
	[INFO] 10.244.1.2:36877 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091381s
	[INFO] 10.244.0.3:44049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209014s
	[INFO] 10.244.0.3:57474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141919s
	[INFO] 10.244.0.3:45582 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000067412s
	[INFO] 10.244.0.3:56382 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000067851s
	[INFO] 10.244.1.2:33931 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017992s
	[INFO] 10.244.1.2:33361 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00024964s
	[INFO] 10.244.1.2:48270 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107161s
	[INFO] 10.244.1.2:53778 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174088s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d9942452293a11f80f22b277a2fcee01abf0e38a51bb3f6b45ddf1dc524b557c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39128 - 857 "HINFO IN 2565273504250767231.420983194396387205. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009530349s
	
	
	==> describe nodes <==
	Name:               multinode-773806
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-773806
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-773806
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T19_28_57_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:28:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-773806
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:36:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:35:14 +0000   Mon, 29 Apr 2024 19:28:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:35:14 +0000   Mon, 29 Apr 2024 19:28:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:35:14 +0000   Mon, 29 Apr 2024 19:28:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:35:14 +0000   Mon, 29 Apr 2024 19:29:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    multinode-773806
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 881b1ba426f74211885cec1846e7f341
	  System UUID:                881b1ba4-26f7-4211-885c-ec1846e7f341
	  Boot ID:                    d39b36e4-9198-4524-be10-914010bd2df8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-b9pvl                     0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m40s
	  kube-system                 coredns-7db6d8ff4d-vdv7z                    100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     7m31s
	  kube-system                 etcd-multinode-773806                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         7m44s
	  kube-system                 kindnet-vdl58                               100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      7m30s
	  kube-system                 kube-apiserver-multinode-773806             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         7m44s
	  kube-system                 kube-controller-manager-multinode-773806    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         7m45s
	  kube-system                 kube-proxy-vfsvr                            0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         7m30s
	  kube-system                 kube-scheduler-multinode-773806             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         7m44s
	  kube-system                 storage-provisioner                         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         7m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   100m (5%!)(MISSING)
	  memory             220Mi (10%!)(MISSING)  220Mi (10%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m28s              kube-proxy       
	  Normal  Starting                 83s                kube-proxy       
	  Normal  NodeHasSufficientPID     7m44s              kubelet          Node multinode-773806 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m44s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m44s              kubelet          Node multinode-773806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m44s              kubelet          Node multinode-773806 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m44s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m31s              node-controller  Node multinode-773806 event: Registered Node multinode-773806 in Controller
	  Normal  NodeReady                7m29s              kubelet          Node multinode-773806 status is now: NodeReady
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  90s (x8 over 90s)  kubelet          Node multinode-773806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s (x8 over 90s)  kubelet          Node multinode-773806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s (x7 over 90s)  kubelet          Node multinode-773806 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           73s                node-controller  Node multinode-773806 event: Registered Node multinode-773806 in Controller
	
	
	Name:               multinode-773806-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-773806-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-773806
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_35_57_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:35:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-773806-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:36:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:36:27 +0000   Mon, 29 Apr 2024 19:35:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:36:27 +0000   Mon, 29 Apr 2024 19:35:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:36:27 +0000   Mon, 29 Apr 2024 19:35:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:36:27 +0000   Mon, 29 Apr 2024 19:36:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    multinode-773806-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f9ab04a3503d4762af8accf5352b5723
	  System UUID:                f9ab04a3-503d-4762-af8a-ccf5352b5723
	  Boot ID:                    5c25a431-81ac-4f94-9519-b02907883f0a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qw8vg    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         48s
	  kube-system                 kindnet-cjpsn              100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      6m52s
	  kube-system                 kube-proxy-bmfbq           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%!)(MISSING)  100m (5%!)(MISSING)
	  memory             50Mi (2%!)(MISSING)  50Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)     0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)     0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m46s                  kube-proxy  
	  Normal  Starting                 38s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m53s (x2 over 6m53s)  kubelet     Node multinode-773806-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m53s (x2 over 6m53s)  kubelet     Node multinode-773806-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m53s (x2 over 6m53s)  kubelet     Node multinode-773806-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m52s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m42s                  kubelet     Node multinode-773806-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  44s (x2 over 44s)      kubelet     Node multinode-773806-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x2 over 44s)      kubelet     Node multinode-773806-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x2 over 44s)      kubelet     Node multinode-773806-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  44s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                34s                    kubelet     Node multinode-773806-m02 status is now: NodeReady
	
	
	Name:               multinode-773806-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-773806-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-773806
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_36_27_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:36:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-773806-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:36:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:36:36 +0000   Mon, 29 Apr 2024 19:36:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:36:36 +0000   Mon, 29 Apr 2024 19:36:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:36:36 +0000   Mon, 29 Apr 2024 19:36:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:36:36 +0000   Mon, 29 Apr 2024 19:36:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    multinode-773806-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 91180b6099a14761b4839b5cfcf1f671
	  System UUID:                91180b60-99a1-4761-b483-9b5cfcf1f671
	  Boot ID:                    4e5334e2-1e28-49b0-9c4f-fb7c6d9dc991
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rfl27       100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (2%!)(MISSING)        50Mi (2%!)(MISSING)      6m5s
	  kube-system                 kube-proxy-p8psp    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%!)(MISSING)  100m (5%!)(MISSING)
	  memory             50Mi (2%!)(MISSING)  50Mi (2%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)     0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)     0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m59s                  kube-proxy  
	  Normal  Starting                 8s                     kube-proxy  
	  Normal  Starting                 5m16s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m5s (x2 over 6m5s)    kubelet     Node multinode-773806-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s (x2 over 6m5s)    kubelet     Node multinode-773806-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x2 over 6m5s)    kubelet     Node multinode-773806-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m5s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m55s                  kubelet     Node multinode-773806-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m22s (x2 over 5m22s)  kubelet     Node multinode-773806-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m22s (x2 over 5m22s)  kubelet     Node multinode-773806-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m22s (x2 over 5m22s)  kubelet     Node multinode-773806-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m12s                  kubelet     Node multinode-773806-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  14s (x2 over 14s)      kubelet     Node multinode-773806-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x2 over 14s)      kubelet     Node multinode-773806-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x2 over 14s)      kubelet     Node multinode-773806-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-773806-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.059061] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057718] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.179301] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.128774] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.280448] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.836895] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.063822] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.019398] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +1.165726] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.413637] systemd-fstab-generator[1288]: Ignoring "noauto" option for root device
	[  +0.092118] kauditd_printk_skb: 30 callbacks suppressed
	[Apr29 19:29] systemd-fstab-generator[1486]: Ignoring "noauto" option for root device
	[  +0.117730] kauditd_printk_skb: 21 callbacks suppressed
	[Apr29 19:30] kauditd_printk_skb: 84 callbacks suppressed
	[Apr29 19:35] systemd-fstab-generator[2764]: Ignoring "noauto" option for root device
	[  +0.148396] systemd-fstab-generator[2776]: Ignoring "noauto" option for root device
	[  +0.189737] systemd-fstab-generator[2790]: Ignoring "noauto" option for root device
	[  +0.144477] systemd-fstab-generator[2802]: Ignoring "noauto" option for root device
	[  +0.328715] systemd-fstab-generator[2830]: Ignoring "noauto" option for root device
	[  +0.824564] systemd-fstab-generator[2931]: Ignoring "noauto" option for root device
	[  +1.886606] systemd-fstab-generator[3056]: Ignoring "noauto" option for root device
	[  +5.726576] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.907683] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.675354] systemd-fstab-generator[3870]: Ignoring "noauto" option for root device
	[ +18.232540] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e] <==
	{"level":"info","ts":"2024-04-29T19:29:49.954476Z","caller":"traceutil/trace.go:171","msg":"trace[543404885] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"338.538913ms","start":"2024-04-29T19:29:49.615923Z","end":"2024-04-29T19:29:49.954462Z","steps":["trace[543404885] 'process raft request'  (duration: 136.496496ms)","trace[543404885] 'compare'  (duration: 200.816838ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T19:29:49.954588Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T19:29:49.615907Z","time spent":"338.63447ms","remote":"127.0.0.1:44590","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3214,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-773806-m02\" mod_revision:485 > success:<request_put:<key:\"/registry/minions/multinode-773806-m02\" value_size:3168 >> failure:<request_range:<key:\"/registry/minions/multinode-773806-m02\" > >"}
	{"level":"warn","ts":"2024-04-29T19:29:49.95427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.411976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-04-29T19:29:49.954849Z","caller":"traceutil/trace.go:171","msg":"trace[1069363287] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:487; }","duration":"160.035916ms","start":"2024-04-29T19:29:49.794797Z","end":"2024-04-29T19:29:49.954833Z","steps":["trace[1069363287] 'agreement among raft nodes before linearized reading'  (duration: 159.345504ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T19:29:49.954911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.402519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-773806-m02\" ","response":"range_response_count:1 size:3229"}
	{"level":"info","ts":"2024-04-29T19:29:49.954959Z","caller":"traceutil/trace.go:171","msg":"trace[1492677912] range","detail":"{range_begin:/registry/minions/multinode-773806-m02; range_end:; response_count:1; response_revision:487; }","duration":"115.472448ms","start":"2024-04-29T19:29:49.839477Z","end":"2024-04-29T19:29:49.95495Z","steps":["trace[1492677912] 'agreement among raft nodes before linearized reading'  (duration: 115.334457ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T19:29:50.065403Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.66464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-cjpsn\" ","response":"range_response_count:1 size:4934"}
	{"level":"info","ts":"2024-04-29T19:29:50.065614Z","caller":"traceutil/trace.go:171","msg":"trace[792982758] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-cjpsn; range_end:; response_count:1; response_revision:489; }","duration":"104.891716ms","start":"2024-04-29T19:29:49.960704Z","end":"2024-04-29T19:29:50.065595Z","steps":["trace[792982758] 'agreement among raft nodes before linearized reading'  (duration: 104.631661ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T19:30:35.714549Z","caller":"traceutil/trace.go:171","msg":"trace[882706862] linearizableReadLoop","detail":"{readStateIndex:621; appliedIndex:619; }","duration":"168.484754ms","start":"2024-04-29T19:30:35.546031Z","end":"2024-04-29T19:30:35.714516Z","steps":["trace[882706862] 'read index received'  (duration: 161.637675ms)","trace[882706862] 'applied index is now lower than readState.Index'  (duration: 6.846415ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T19:30:35.714775Z","caller":"traceutil/trace.go:171","msg":"trace[1624359080] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"255.344807ms","start":"2024-04-29T19:30:35.459415Z","end":"2024-04-29T19:30:35.714759Z","steps":["trace[1624359080] 'process raft request'  (duration: 248.32785ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T19:30:35.717646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.172466ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-773806-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-04-29T19:30:35.717723Z","caller":"traceutil/trace.go:171","msg":"trace[1524988239] range","detail":"{range_begin:/registry/minions/multinode-773806-m03; range_end:; response_count:1; response_revision:588; }","duration":"145.282907ms","start":"2024-04-29T19:30:35.572432Z","end":"2024-04-29T19:30:35.717715Z","steps":["trace[1524988239] 'agreement among raft nodes before linearized reading'  (duration: 145.16778ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T19:30:35.714819Z","caller":"traceutil/trace.go:171","msg":"trace[4052478] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"172.261741ms","start":"2024-04-29T19:30:35.542553Z","end":"2024-04-29T19:30:35.714815Z","steps":["trace[4052478] 'process raft request'  (duration: 171.927263ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T19:30:35.715075Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.979329ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.127\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-04-29T19:30:35.717978Z","caller":"traceutil/trace.go:171","msg":"trace[534659608] range","detail":"{range_begin:/registry/masterleases/192.168.39.127; range_end:; response_count:1; response_revision:588; }","duration":"171.99351ms","start":"2024-04-29T19:30:35.545978Z","end":"2024-04-29T19:30:35.717971Z","steps":["trace[534659608] 'agreement among raft nodes before linearized reading'  (duration: 168.85937ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T19:33:34.889809Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-29T19:33:34.889941Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-773806","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"]}
	{"level":"warn","ts":"2024-04-29T19:33:34.890029Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T19:33:34.890232Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T19:33:34.930388Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.127:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T19:33:34.930493Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.127:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T19:33:34.930875Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9dc5e8b969e9632c","current-leader-member-id":"9dc5e8b969e9632c"}
	{"level":"info","ts":"2024-04-29T19:33:34.936589Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-04-29T19:33:34.936748Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-04-29T19:33:34.936825Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-773806","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"]}
	
	
	==> etcd [dd171b7365ef28c752b6dbfa8eeb2824617f2c787b80af5ed48d968ff20b759d] <==
	{"level":"info","ts":"2024-04-29T19:35:11.419212Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:35:11.41932Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:35:11.419593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c switched to configuration voters=(11368748717410181932)"}
	{"level":"info","ts":"2024-04-29T19:35:11.419678Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","added-peer-id":"9dc5e8b969e9632c","added-peer-peer-urls":["https://192.168.39.127:2380"]}
	{"level":"info","ts":"2024-04-29T19:35:11.419843Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:35:11.421276Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:35:11.434814Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T19:35:11.441423Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-04-29T19:35:11.443207Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-04-29T19:35:11.451326Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9dc5e8b969e9632c","initial-advertise-peer-urls":["https://192.168.39.127:2380"],"listen-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.127:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T19:35:11.451514Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T19:35:13.182731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T19:35:13.182774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T19:35:13.182818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c received MsgPreVoteResp from 9dc5e8b969e9632c at term 2"}
	{"level":"info","ts":"2024-04-29T19:35:13.182833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T19:35:13.182848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c received MsgVoteResp from 9dc5e8b969e9632c at term 3"}
	{"level":"info","ts":"2024-04-29T19:35:13.182857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became leader at term 3"}
	{"level":"info","ts":"2024-04-29T19:35:13.182867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9dc5e8b969e9632c elected leader 9dc5e8b969e9632c at term 3"}
	{"level":"info","ts":"2024-04-29T19:35:13.191517Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9dc5e8b969e9632c","local-member-attributes":"{Name:multinode-773806 ClientURLs:[https://192.168.39.127:2379]}","request-path":"/0/members/9dc5e8b969e9632c/attributes","cluster-id":"367c7cb0db09c3ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T19:35:13.191528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:35:13.191905Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T19:35:13.191948Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T19:35:13.191983Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:35:13.193861Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T19:35:13.193861Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.127:2379"}
	
	
	==> kernel <==
	 19:36:40 up 8 min,  0 users,  load average: 0.73, 0.60, 0.30
	Linux multinode-773806 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01] <==
	I0429 19:32:52.056072       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.3.0/24] 
	I0429 19:33:02.071710       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:33:02.071806       1 main.go:227] handling current node
	I0429 19:33:02.071852       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:33:02.071878       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:33:02.072005       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0429 19:33:02.072026       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.3.0/24] 
	I0429 19:33:12.086137       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:33:12.086247       1 main.go:227] handling current node
	I0429 19:33:12.086264       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:33:12.086271       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:33:12.086384       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0429 19:33:12.086419       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.3.0/24] 
	I0429 19:33:22.098690       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:33:22.098737       1 main.go:227] handling current node
	I0429 19:33:22.098749       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:33:22.098756       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:33:22.098885       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0429 19:33:22.098916       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.3.0/24] 
	I0429 19:33:32.113885       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:33:32.113934       1 main.go:227] handling current node
	I0429 19:33:32.113945       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:33:32.113951       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:33:32.114056       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0429 19:33:32.114086       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f44b8ef5992602486837e2ea2c56864636442ed442c246e5a5b9bb93be932e23] <==
	I0429 19:35:56.766085       1 main.go:227] handling current node
	I0429 19:35:56.766102       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0429 19:35:56.766111       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.3.0/24] 
	I0429 19:36:06.779096       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:36:06.779290       1 main.go:227] handling current node
	I0429 19:36:06.779340       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:36:06.779361       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:36:06.779502       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0429 19:36:06.779523       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.3.0/24] 
	I0429 19:36:16.792692       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:36:16.793052       1 main.go:227] handling current node
	I0429 19:36:16.793238       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:36:16.793324       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:36:16.793722       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0429 19:36:16.793785       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.3.0/24] 
	I0429 19:36:26.800481       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:36:26.800584       1 main.go:227] handling current node
	I0429 19:36:26.800620       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:36:26.800645       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:36:36.805446       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:36:36.805537       1 main.go:227] handling current node
	I0429 19:36:36.805560       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:36:36.805578       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:36:36.805697       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0429 19:36:36.805717       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe] <==
	I0429 19:33:34.912436       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 19:33:34.912514       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 19:33:34.912550       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0429 19:33:34.912745       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0429 19:33:34.913331       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0429 19:33:34.913400       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 19:33:34.913499       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0429 19:33:34.913529       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	W0429 19:33:34.913602       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0429 19:33:34.914489       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0429 19:33:34.915862       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0429 19:33:34.915933       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0429 19:33:34.916426       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0429 19:33:34.917339       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	W0429 19:33:34.918392       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.918630       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.922725       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.923481       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.923768       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.923928       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.924701       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.924780       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.925133       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.915021       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.925549       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [312d2cc38cb7921577370967c3e1f1355c1f3e19a6e1ebea1e5999e69c8051c0] <==
	I0429 19:35:14.632430       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 19:35:14.634749       1 aggregator.go:165] initial CRD sync complete...
	I0429 19:35:14.634862       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 19:35:14.634973       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 19:35:14.655550       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 19:35:14.655599       1 policy_source.go:224] refreshing policies
	I0429 19:35:14.655813       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 19:35:14.671071       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 19:35:14.671337       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 19:35:14.677400       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 19:35:14.682786       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 19:35:14.682955       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 19:35:14.683577       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 19:35:14.684477       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0429 19:35:14.690624       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0429 19:35:14.698499       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 19:35:14.759410       1 cache.go:39] Caches are synced for autoregister controller
	I0429 19:35:15.491915       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 19:35:17.098536       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 19:35:17.253533       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 19:35:17.296344       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 19:35:17.380715       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 19:35:17.387838       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 19:35:27.352767       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 19:35:27.426080       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [158117fc5586ddd5f255b607d0890364bb2620e5f780e3a30ca08d378dd8fe43] <==
	I0429 19:35:28.028272       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 19:35:28.059962       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 19:35:28.060047       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 19:35:52.246373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.02948ms"
	I0429 19:35:52.259529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.100833ms"
	I0429 19:35:52.259947       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.164µs"
	I0429 19:35:56.849049       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-773806-m02\" does not exist"
	I0429 19:35:56.864536       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-773806-m02" podCIDRs=["10.244.1.0/24"]
	I0429 19:35:57.899466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.873µs"
	I0429 19:35:58.742642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.852µs"
	I0429 19:35:58.789030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.567µs"
	I0429 19:35:58.798769       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.03µs"
	I0429 19:35:58.813459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.372µs"
	I0429 19:35:58.826577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.648µs"
	I0429 19:35:58.832108       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.524µs"
	I0429 19:36:06.216064       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:36:06.242696       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="459.802µs"
	I0429 19:36:06.256687       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.462µs"
	I0429 19:36:09.225126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.069923ms"
	I0429 19:36:09.226462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.196µs"
	I0429 19:36:25.829616       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:36:26.980800       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:36:26.980816       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-773806-m03\" does not exist"
	I0429 19:36:26.999619       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-773806-m03" podCIDRs=["10.244.2.0/24"]
	I0429 19:36:36.574641       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	
	
	==> kube-controller-manager [bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d] <==
	I0429 19:29:48.079013       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-773806-m02\" does not exist"
	I0429 19:29:48.092457       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-773806-m02" podCIDRs=["10.244.1.0/24"]
	I0429 19:29:49.523600       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-773806-m02"
	I0429 19:29:58.486572       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:30:00.961583       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.641974ms"
	I0429 19:30:00.987973       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.220571ms"
	I0429 19:30:00.998375       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.329965ms"
	I0429 19:30:00.998513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.812µs"
	I0429 19:30:04.826336       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.298573ms"
	I0429 19:30:04.826897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.798µs"
	I0429 19:30:05.026571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.782267ms"
	I0429 19:30:05.026834       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.567µs"
	I0429 19:30:35.718088       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:30:35.719810       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-773806-m03\" does not exist"
	I0429 19:30:35.735009       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-773806-m03" podCIDRs=["10.244.2.0/24"]
	I0429 19:30:39.545973       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-773806-m03"
	I0429 19:30:45.816427       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:31:17.540512       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:31:18.616392       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:31:18.616572       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-773806-m03\" does not exist"
	I0429 19:31:18.654272       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-773806-m03" podCIDRs=["10.244.3.0/24"]
	I0429 19:31:28.256819       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:32:09.595354       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:32:14.696698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.635458ms"
	I0429 19:32:14.698456       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.418µs"
	
	
	==> kube-proxy [305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35] <==
	I0429 19:29:11.037839       1 server_linux.go:69] "Using iptables proxy"
	I0429 19:29:11.062950       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.127"]
	I0429 19:29:11.155698       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:29:11.155726       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:29:11.155741       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:29:11.159917       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:29:11.160444       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:29:11.160636       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:29:11.161960       1 config.go:192] "Starting service config controller"
	I0429 19:29:11.162070       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:29:11.162823       1 config.go:319] "Starting node config controller"
	I0429 19:29:11.163058       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:29:11.163090       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:29:11.165705       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:29:11.263949       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:29:11.264006       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:29:11.265834       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [a23b20c1a1888c715e25c28dfd27a4f61f8d433f9e836b9c39c6ca7f3ca0e7e8] <==
	I0429 19:35:16.067080       1 server_linux.go:69] "Using iptables proxy"
	I0429 19:35:16.088253       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.127"]
	I0429 19:35:16.228297       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:35:16.228359       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:35:16.228377       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:35:16.234532       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:35:16.234730       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:35:16.234775       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:35:16.236289       1 config.go:192] "Starting service config controller"
	I0429 19:35:16.236373       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:35:16.236428       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:35:16.236433       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:35:16.236838       1 config.go:319] "Starting node config controller"
	I0429 19:35:16.237112       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:35:16.336970       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:35:16.337105       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:35:16.337370       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [27f524cad554a80a5d6a27ba6563ea8c8f621a795a1c50623338c8fe8a4115da] <==
	I0429 19:35:14.604701       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 19:35:14.604864       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:35:14.613727       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 19:35:14.616316       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 19:35:14.617218       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 19:35:14.617281       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0429 19:35:14.633501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 19:35:14.633564       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 19:35:14.633677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:35:14.633715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:35:14.633758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 19:35:14.633766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 19:35:14.633821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 19:35:14.633857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 19:35:14.636481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:35:14.636528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:35:14.636584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 19:35:14.636622       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 19:35:14.636671       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 19:35:14.636681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 19:35:14.636717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 19:35:14.636754       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 19:35:14.636786       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 19:35:14.636821       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 19:35:14.717242       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7] <==
	E0429 19:28:53.846992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 19:28:53.847834       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:28:53.847873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 19:28:53.847884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:28:53.848047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 19:28:54.671030       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 19:28:54.671095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 19:28:54.725886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 19:28:54.725954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 19:28:54.782936       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 19:28:54.783067       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 19:28:54.790565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:28:54.790658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:28:54.879863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 19:28:54.880068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 19:28:54.901050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:28:54.901141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:28:55.127613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 19:28:55.127867       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 19:28:55.150265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 19:28:55.150439       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 19:28:55.177683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 19:28:55.179448       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0429 19:28:57.638542       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 19:33:34.882425       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 29 19:35:10 multinode-773806 kubelet[3064]: W0429 19:35:10.923655    3064 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.127:8443: connect: connection refused
	Apr 29 19:35:10 multinode-773806 kubelet[3064]: E0429 19:35:10.923717    3064 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.127:8443: connect: connection refused
	Apr 29 19:35:11 multinode-773806 kubelet[3064]: I0429 19:35:11.535030    3064 kubelet_node_status.go:73] "Attempting to register node" node="multinode-773806"
	Apr 29 19:35:14 multinode-773806 kubelet[3064]: I0429 19:35:14.752087    3064 kubelet_node_status.go:112] "Node was previously registered" node="multinode-773806"
	Apr 29 19:35:14 multinode-773806 kubelet[3064]: I0429 19:35:14.752577    3064 kubelet_node_status.go:76] "Successfully registered node" node="multinode-773806"
	Apr 29 19:35:14 multinode-773806 kubelet[3064]: I0429 19:35:14.753526    3064 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 29 19:35:14 multinode-773806 kubelet[3064]: I0429 19:35:14.754580    3064 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 29 19:35:14 multinode-773806 kubelet[3064]: I0429 19:35:14.989335    3064 apiserver.go:52] "Watching apiserver"
	Apr 29 19:35:14 multinode-773806 kubelet[3064]: I0429 19:35:14.993249    3064 topology_manager.go:215] "Topology Admit Handler" podUID="916bfb3a-8ecd-470b-9ae4-615beffd9990" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vdv7z"
	Apr 29 19:35:14 multinode-773806 kubelet[3064]: I0429 19:35:14.993463    3064 topology_manager.go:215] "Topology Admit Handler" podUID="6f195859-a11d-4707-b0e8-92b7164c397d" podNamespace="kube-system" podName="kindnet-vdl58"
	Apr 29 19:35:14 multinode-773806 kubelet[3064]: I0429 19:35:14.993573    3064 topology_manager.go:215] "Topology Admit Handler" podUID="ca6e7675-8035-4977-9d13-512c5d336ba7" podNamespace="kube-system" podName="kube-proxy-vfsvr"
	Apr 29 19:35:14 multinode-773806 kubelet[3064]: I0429 19:35:14.993654    3064 topology_manager.go:215] "Topology Admit Handler" podUID="a28cf547-261c-4662-bd9c-4966ca3cdfd1" podNamespace="kube-system" podName="storage-provisioner"
	Apr 29 19:35:14 multinode-773806 kubelet[3064]: I0429 19:35:14.993722    3064 topology_manager.go:215] "Topology Admit Handler" podUID="c4e08525-845b-423c-8481-20addac1f5e7" podNamespace="default" podName="busybox-fc5497c4f-b9pvl"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.006923    3064 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.086482    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca6e7675-8035-4977-9d13-512c5d336ba7-xtables-lock\") pod \"kube-proxy-vfsvr\" (UID: \"ca6e7675-8035-4977-9d13-512c5d336ba7\") " pod="kube-system/kube-proxy-vfsvr"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.086612    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a28cf547-261c-4662-bd9c-4966ca3cdfd1-tmp\") pod \"storage-provisioner\" (UID: \"a28cf547-261c-4662-bd9c-4966ca3cdfd1\") " pod="kube-system/storage-provisioner"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.086678    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6f195859-a11d-4707-b0e8-92b7164c397d-cni-cfg\") pod \"kindnet-vdl58\" (UID: \"6f195859-a11d-4707-b0e8-92b7164c397d\") " pod="kube-system/kindnet-vdl58"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.086755    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f195859-a11d-4707-b0e8-92b7164c397d-xtables-lock\") pod \"kindnet-vdl58\" (UID: \"6f195859-a11d-4707-b0e8-92b7164c397d\") " pod="kube-system/kindnet-vdl58"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.086839    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f195859-a11d-4707-b0e8-92b7164c397d-lib-modules\") pod \"kindnet-vdl58\" (UID: \"6f195859-a11d-4707-b0e8-92b7164c397d\") " pod="kube-system/kindnet-vdl58"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.086973    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca6e7675-8035-4977-9d13-512c5d336ba7-lib-modules\") pod \"kube-proxy-vfsvr\" (UID: \"ca6e7675-8035-4977-9d13-512c5d336ba7\") " pod="kube-system/kube-proxy-vfsvr"
	Apr 29 19:36:10 multinode-773806 kubelet[3064]: E0429 19:36:10.116518    3064 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:36:10 multinode-773806 kubelet[3064]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:36:10 multinode-773806 kubelet[3064]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:36:10 multinode-773806 kubelet[3064]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:36:10 multinode-773806 kubelet[3064]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0429 19:36:39.282039   50255 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18774-7754/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-773806 -n multinode-773806
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-773806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (310.14s)

TestMultiNode/serial/StopMultiNode (141.53s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 stop
E0429 19:37:48.917403   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-773806 stop: exit status 82 (2m0.475504874s)

-- stdout --
	* Stopping node "multinode-773806-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-773806 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 status
E0429 19:39:00.893785   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-773806 status: exit status 3 (18.667846104s)

-- stdout --
	multinode-773806
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-773806-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	E0429 19:39:02.762417   50922 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host
	E0429 19:39:02.762453   50922 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.211:22: connect: no route to host

** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-773806 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-773806 -n multinode-773806
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-773806 logs -n 25: (1.67947278s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773806 cp multinode-773806-m02:/home/docker/cp-test.txt                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806:/home/docker/cp-test_multinode-773806-m02_multinode-773806.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n multinode-773806 sudo cat                                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | /home/docker/cp-test_multinode-773806-m02_multinode-773806.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-773806 cp multinode-773806-m02:/home/docker/cp-test.txt                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03:/home/docker/cp-test_multinode-773806-m02_multinode-773806-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n multinode-773806-m03 sudo cat                                   | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | /home/docker/cp-test_multinode-773806-m02_multinode-773806-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-773806 cp testdata/cp-test.txt                                                | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773806 cp multinode-773806-m03:/home/docker/cp-test.txt                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1658952582/001/cp-test_multinode-773806-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-773806 cp multinode-773806-m03:/home/docker/cp-test.txt                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806:/home/docker/cp-test_multinode-773806-m03_multinode-773806.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n multinode-773806 sudo cat                                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | /home/docker/cp-test_multinode-773806-m03_multinode-773806.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-773806 cp multinode-773806-m03:/home/docker/cp-test.txt                       | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m02:/home/docker/cp-test_multinode-773806-m03_multinode-773806-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n multinode-773806-m02 sudo cat                                   | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | /home/docker/cp-test_multinode-773806-m03_multinode-773806-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-773806 node stop m03                                                          | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	| node    | multinode-773806 node start                                                             | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:31 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-773806                                                                | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:31 UTC |                     |
	| stop    | -p multinode-773806                                                                     | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:31 UTC |                     |
	| start   | -p multinode-773806                                                                     | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:33 UTC | 29 Apr 24 19:36 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-773806                                                                | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:36 UTC |                     |
	| node    | multinode-773806 node delete                                                            | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:36 UTC | 29 Apr 24 19:36 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-773806 stop                                                                   | multinode-773806 | jenkins | v1.33.0 | 29 Apr 24 19:36 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 19:33:33
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 19:33:33.991835   49175 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:33:33.991967   49175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:33:33.991979   49175 out.go:304] Setting ErrFile to fd 2...
	I0429 19:33:33.991986   49175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:33:33.992183   49175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:33:33.992796   49175 out.go:298] Setting JSON to false
	I0429 19:33:33.993823   49175 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4512,"bootTime":1714414702,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 19:33:33.993885   49175 start.go:139] virtualization: kvm guest
	I0429 19:33:33.996516   49175 out.go:177] * [multinode-773806] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 19:33:33.998436   49175 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:33:33.999986   49175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:33:33.998372   49175 notify.go:220] Checking for updates...
	I0429 19:33:34.001887   49175 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:33:34.003625   49175 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:33:34.005188   49175 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 19:33:34.006717   49175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:33:34.008541   49175 config.go:182] Loaded profile config "multinode-773806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:33:34.008659   49175 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:33:34.009250   49175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:33:34.009304   49175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:33:34.024374   49175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39369
	I0429 19:33:34.024873   49175 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:33:34.025396   49175 main.go:141] libmachine: Using API Version  1
	I0429 19:33:34.025420   49175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:33:34.025797   49175 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:33:34.026004   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:33:34.063449   49175 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 19:33:34.064620   49175 start.go:297] selected driver: kvm2
	I0429 19:33:34.064638   49175 start.go:901] validating driver "kvm2" against &{Name:multinode-773806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.0 ClusterName:multinode-773806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:33:34.064776   49175 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:33:34.065110   49175 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:33:34.065176   49175 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 19:33:34.080178   49175 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 19:33:34.080798   49175 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:33:34.080849   49175 cni.go:84] Creating CNI manager for ""
	I0429 19:33:34.080859   49175 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 19:33:34.080923   49175 start.go:340] cluster config:
	{Name:multinode-773806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-773806 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:33:34.081040   49175 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:33:34.083663   49175 out.go:177] * Starting "multinode-773806" primary control-plane node in "multinode-773806" cluster
	I0429 19:33:34.084954   49175 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:33:34.084991   49175 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 19:33:34.084998   49175 cache.go:56] Caching tarball of preloaded images
	I0429 19:33:34.085083   49175 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 19:33:34.085095   49175 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 19:33:34.085210   49175 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/config.json ...
	I0429 19:33:34.085383   49175 start.go:360] acquireMachinesLock for multinode-773806: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:33:34.085423   49175 start.go:364] duration metric: took 23.863µs to acquireMachinesLock for "multinode-773806"
	I0429 19:33:34.085444   49175 start.go:96] Skipping create...Using existing machine configuration
	I0429 19:33:34.085452   49175 fix.go:54] fixHost starting: 
	I0429 19:33:34.085710   49175 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:33:34.085743   49175 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:33:34.100203   49175 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0429 19:33:34.100662   49175 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:33:34.101111   49175 main.go:141] libmachine: Using API Version  1
	I0429 19:33:34.101133   49175 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:33:34.101443   49175 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:33:34.101625   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:33:34.101798   49175 main.go:141] libmachine: (multinode-773806) Calling .GetState
	I0429 19:33:34.103423   49175 fix.go:112] recreateIfNeeded on multinode-773806: state=Running err=<nil>
	W0429 19:33:34.103457   49175 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 19:33:34.106239   49175 out.go:177] * Updating the running kvm2 "multinode-773806" VM ...
	I0429 19:33:34.107546   49175 machine.go:94] provisionDockerMachine start ...
	I0429 19:33:34.107570   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:33:34.107797   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:33:34.110454   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.110852   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.110881   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.111027   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:33:34.111204   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.111385   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.111508   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:33:34.111668   49175 main.go:141] libmachine: Using SSH client type: native
	I0429 19:33:34.111898   49175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0429 19:33:34.111910   49175 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 19:33:34.232249   49175 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773806
	
	I0429 19:33:34.232280   49175 main.go:141] libmachine: (multinode-773806) Calling .GetMachineName
	I0429 19:33:34.232526   49175 buildroot.go:166] provisioning hostname "multinode-773806"
	I0429 19:33:34.232552   49175 main.go:141] libmachine: (multinode-773806) Calling .GetMachineName
	I0429 19:33:34.232761   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:33:34.235460   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.235889   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.235918   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.236090   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:33:34.236344   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.236493   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.236636   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:33:34.236771   49175 main.go:141] libmachine: Using SSH client type: native
	I0429 19:33:34.236939   49175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0429 19:33:34.236951   49175 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-773806 && echo "multinode-773806" | sudo tee /etc/hostname
	I0429 19:33:34.373214   49175 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773806
	
	I0429 19:33:34.373250   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:33:34.376042   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.376411   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.376461   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.376623   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:33:34.376780   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.376938   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.377064   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:33:34.377217   49175 main.go:141] libmachine: Using SSH client type: native
	I0429 19:33:34.377375   49175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0429 19:33:34.377390   49175 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-773806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773806/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-773806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:33:34.495374   49175 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:33:34.495403   49175 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:33:34.495434   49175 buildroot.go:174] setting up certificates
	I0429 19:33:34.495456   49175 provision.go:84] configureAuth start
	I0429 19:33:34.495474   49175 main.go:141] libmachine: (multinode-773806) Calling .GetMachineName
	I0429 19:33:34.495731   49175 main.go:141] libmachine: (multinode-773806) Calling .GetIP
	I0429 19:33:34.498078   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.498431   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.498458   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.498590   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:33:34.500941   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.501308   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.501331   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.501449   49175 provision.go:143] copyHostCerts
	I0429 19:33:34.501479   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:33:34.501528   49175 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:33:34.501541   49175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:33:34.501618   49175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:33:34.501688   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:33:34.501708   49175 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:33:34.501715   49175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:33:34.501740   49175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:33:34.501821   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:33:34.501845   49175 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:33:34.501852   49175 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:33:34.501873   49175 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:33:34.501913   49175 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.multinode-773806 san=[127.0.0.1 192.168.39.127 localhost minikube multinode-773806]
	I0429 19:33:34.557143   49175 provision.go:177] copyRemoteCerts
	I0429 19:33:34.557188   49175 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:33:34.557207   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:33:34.559456   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.559782   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.559807   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.559954   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:33:34.560125   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.560282   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:33:34.560423   49175 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/multinode-773806/id_rsa Username:docker}
	I0429 19:33:34.653008   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 19:33:34.653080   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:33:34.683856   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 19:33:34.683961   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0429 19:33:34.713972   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 19:33:34.714040   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 19:33:34.744063   49175 provision.go:87] duration metric: took 248.589098ms to configureAuth
	I0429 19:33:34.744096   49175 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:33:34.744363   49175 config.go:182] Loaded profile config "multinode-773806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:33:34.744434   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:33:34.747207   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.747598   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:33:34.747629   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:33:34.747797   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:33:34.748006   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.748275   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:33:34.748419   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:33:34.748571   49175 main.go:141] libmachine: Using SSH client type: native
	I0429 19:33:34.748796   49175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0429 19:33:34.748814   49175 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:35:05.540872   49175 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 19:35:05.540905   49175 machine.go:97] duration metric: took 1m31.433345092s to provisionDockerMachine
	I0429 19:35:05.540921   49175 start.go:293] postStartSetup for "multinode-773806" (driver="kvm2")
	I0429 19:35:05.540937   49175 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:35:05.540963   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:35:05.541284   49175 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:35:05.541344   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:35:05.544538   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.544994   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:35:05.545015   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.545153   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:35:05.545350   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:35:05.545514   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:35:05.545644   49175 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/multinode-773806/id_rsa Username:docker}
	I0429 19:35:05.635981   49175 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:35:05.641067   49175 command_runner.go:130] > NAME=Buildroot
	I0429 19:35:05.641089   49175 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0429 19:35:05.641093   49175 command_runner.go:130] > ID=buildroot
	I0429 19:35:05.641098   49175 command_runner.go:130] > VERSION_ID=2023.02.9
	I0429 19:35:05.641107   49175 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0429 19:35:05.641133   49175 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:35:05.641143   49175 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:35:05.641201   49175 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:35:05.641280   49175 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:35:05.641289   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /etc/ssl/certs/151242.pem
	I0429 19:35:05.641362   49175 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:35:05.653065   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:35:05.680006   49175 start.go:296] duration metric: took 139.070949ms for postStartSetup
	I0429 19:35:05.680049   49175 fix.go:56] duration metric: took 1m31.594595333s for fixHost
	I0429 19:35:05.680077   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:35:05.683392   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.683853   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:35:05.683885   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.684078   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:35:05.684255   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:35:05.684452   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:35:05.684618   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:35:05.684800   49175 main.go:141] libmachine: Using SSH client type: native
	I0429 19:35:05.684979   49175 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0429 19:35:05.684991   49175 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 19:35:05.803708   49175 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714419305.777775377
	
	I0429 19:35:05.803735   49175 fix.go:216] guest clock: 1714419305.777775377
	I0429 19:35:05.803745   49175 fix.go:229] Guest: 2024-04-29 19:35:05.777775377 +0000 UTC Remote: 2024-04-29 19:35:05.680055131 +0000 UTC m=+91.742029303 (delta=97.720246ms)
	I0429 19:35:05.803765   49175 fix.go:200] guest clock delta is within tolerance: 97.720246ms
	I0429 19:35:05.803771   49175 start.go:83] releasing machines lock for "multinode-773806", held for 1m31.718338271s
	I0429 19:35:05.803793   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:35:05.804162   49175 main.go:141] libmachine: (multinode-773806) Calling .GetIP
	I0429 19:35:05.806837   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.807209   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:35:05.807231   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.807430   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:35:05.807936   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:35:05.808113   49175 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:35:05.808224   49175 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:35:05.808263   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:35:05.808389   49175 ssh_runner.go:195] Run: cat /version.json
	I0429 19:35:05.808414   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:35:05.811145   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.811223   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.811643   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:35:05.811693   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.811723   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:35:05.811746   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:05.811884   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:35:05.811962   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:35:05.812052   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:35:05.812112   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:35:05.812171   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:35:05.812314   49175 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/multinode-773806/id_rsa Username:docker}
	I0429 19:35:05.812353   49175 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:35:05.812517   49175 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/multinode-773806/id_rsa Username:docker}
	I0429 19:35:05.917360   49175 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0429 19:35:05.918216   49175 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0429 19:35:05.918365   49175 ssh_runner.go:195] Run: systemctl --version
	I0429 19:35:05.925385   49175 command_runner.go:130] > systemd 252 (252)
	I0429 19:35:05.925428   49175 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0429 19:35:05.925479   49175 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:35:06.100476   49175 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 19:35:06.107367   49175 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0429 19:35:06.107595   49175 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:35:06.107665   49175 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:35:06.118538   49175 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 19:35:06.118561   49175 start.go:494] detecting cgroup driver to use...
	I0429 19:35:06.118615   49175 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:35:06.136934   49175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:35:06.151779   49175 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:35:06.151897   49175 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:35:06.167406   49175 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:35:06.182732   49175 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:35:06.330224   49175 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:35:06.475889   49175 docker.go:233] disabling docker service ...
	I0429 19:35:06.475963   49175 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:35:06.495296   49175 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:35:06.510338   49175 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:35:06.661248   49175 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:35:06.812462   49175 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 19:35:06.829819   49175 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:35:06.850861   49175 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0429 19:35:06.850912   49175 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 19:35:06.850961   49175 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.862802   49175 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:35:06.862857   49175 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.874226   49175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.886079   49175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.897136   49175 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:35:06.909093   49175 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.942885   49175 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.956116   49175 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:35:06.967739   49175 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:35:06.978275   49175 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0429 19:35:06.978345   49175 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:35:06.988613   49175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:35:07.135687   49175 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 19:35:07.402286   49175 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:35:07.402355   49175 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:35:07.408902   49175 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0429 19:35:07.408923   49175 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0429 19:35:07.408930   49175 command_runner.go:130] > Device: 0,22	Inode: 1329        Links: 1
	I0429 19:35:07.408937   49175 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 19:35:07.408942   49175 command_runner.go:130] > Access: 2024-04-29 19:35:07.259778624 +0000
	I0429 19:35:07.408948   49175 command_runner.go:130] > Modify: 2024-04-29 19:35:07.259778624 +0000
	I0429 19:35:07.408953   49175 command_runner.go:130] > Change: 2024-04-29 19:35:07.259778624 +0000
	I0429 19:35:07.408957   49175 command_runner.go:130] >  Birth: -
	I0429 19:35:07.409048   49175 start.go:562] Will wait 60s for crictl version
	I0429 19:35:07.409112   49175 ssh_runner.go:195] Run: which crictl
	I0429 19:35:07.413559   49175 command_runner.go:130] > /usr/bin/crictl
	I0429 19:35:07.413632   49175 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:35:07.459073   49175 command_runner.go:130] > Version:  0.1.0
	I0429 19:35:07.459096   49175 command_runner.go:130] > RuntimeName:  cri-o
	I0429 19:35:07.459101   49175 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0429 19:35:07.459105   49175 command_runner.go:130] > RuntimeApiVersion:  v1
	I0429 19:35:07.460725   49175 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 19:35:07.460812   49175 ssh_runner.go:195] Run: crio --version
	I0429 19:35:07.491928   49175 command_runner.go:130] > crio version 1.29.1
	I0429 19:35:07.491952   49175 command_runner.go:130] > Version:        1.29.1
	I0429 19:35:07.491958   49175 command_runner.go:130] > GitCommit:      unknown
	I0429 19:35:07.491962   49175 command_runner.go:130] > GitCommitDate:  unknown
	I0429 19:35:07.491985   49175 command_runner.go:130] > GitTreeState:   clean
	I0429 19:35:07.491991   49175 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0429 19:35:07.491995   49175 command_runner.go:130] > GoVersion:      go1.21.6
	I0429 19:35:07.492000   49175 command_runner.go:130] > Compiler:       gc
	I0429 19:35:07.492004   49175 command_runner.go:130] > Platform:       linux/amd64
	I0429 19:35:07.492008   49175 command_runner.go:130] > Linkmode:       dynamic
	I0429 19:35:07.492012   49175 command_runner.go:130] > BuildTags:      
	I0429 19:35:07.492022   49175 command_runner.go:130] >   containers_image_ostree_stub
	I0429 19:35:07.492026   49175 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0429 19:35:07.492030   49175 command_runner.go:130] >   btrfs_noversion
	I0429 19:35:07.492034   49175 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0429 19:35:07.492038   49175 command_runner.go:130] >   libdm_no_deferred_remove
	I0429 19:35:07.492041   49175 command_runner.go:130] >   seccomp
	I0429 19:35:07.492047   49175 command_runner.go:130] > LDFlags:          unknown
	I0429 19:35:07.492053   49175 command_runner.go:130] > SeccompEnabled:   true
	I0429 19:35:07.492057   49175 command_runner.go:130] > AppArmorEnabled:  false
	I0429 19:35:07.493444   49175 ssh_runner.go:195] Run: crio --version
	I0429 19:35:07.528960   49175 command_runner.go:130] > crio version 1.29.1
	I0429 19:35:07.528994   49175 command_runner.go:130] > Version:        1.29.1
	I0429 19:35:07.529002   49175 command_runner.go:130] > GitCommit:      unknown
	I0429 19:35:07.529009   49175 command_runner.go:130] > GitCommitDate:  unknown
	I0429 19:35:07.529015   49175 command_runner.go:130] > GitTreeState:   clean
	I0429 19:35:07.529024   49175 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0429 19:35:07.529030   49175 command_runner.go:130] > GoVersion:      go1.21.6
	I0429 19:35:07.529037   49175 command_runner.go:130] > Compiler:       gc
	I0429 19:35:07.529043   49175 command_runner.go:130] > Platform:       linux/amd64
	I0429 19:35:07.529050   49175 command_runner.go:130] > Linkmode:       dynamic
	I0429 19:35:07.529058   49175 command_runner.go:130] > BuildTags:      
	I0429 19:35:07.529063   49175 command_runner.go:130] >   containers_image_ostree_stub
	I0429 19:35:07.529068   49175 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0429 19:35:07.529072   49175 command_runner.go:130] >   btrfs_noversion
	I0429 19:35:07.529079   49175 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0429 19:35:07.529084   49175 command_runner.go:130] >   libdm_no_deferred_remove
	I0429 19:35:07.529088   49175 command_runner.go:130] >   seccomp
	I0429 19:35:07.529093   49175 command_runner.go:130] > LDFlags:          unknown
	I0429 19:35:07.529108   49175 command_runner.go:130] > SeccompEnabled:   true
	I0429 19:35:07.529122   49175 command_runner.go:130] > AppArmorEnabled:  false
	I0429 19:35:07.531686   49175 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 19:35:07.533484   49175 main.go:141] libmachine: (multinode-773806) Calling .GetIP
	I0429 19:35:07.536184   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:07.536594   49175 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:35:07.536619   49175 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:35:07.536797   49175 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 19:35:07.541798   49175 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0429 19:35:07.541954   49175 kubeadm.go:877] updating cluster {Name:multinode-773806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-773806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 19:35:07.542118   49175 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:35:07.542174   49175 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:35:07.592377   49175 command_runner.go:130] > {
	I0429 19:35:07.592406   49175 command_runner.go:130] >   "images": [
	I0429 19:35:07.592412   49175 command_runner.go:130] >     {
	I0429 19:35:07.592422   49175 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0429 19:35:07.592429   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.592436   49175 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0429 19:35:07.592441   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592446   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.592457   49175 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0429 19:35:07.592467   49175 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0429 19:35:07.592473   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592480   49175 command_runner.go:130] >       "size": "65291810",
	I0429 19:35:07.592487   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.592497   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.592510   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.592520   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.592526   49175 command_runner.go:130] >     },
	I0429 19:35:07.592532   49175 command_runner.go:130] >     {
	I0429 19:35:07.592546   49175 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0429 19:35:07.592556   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.592566   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0429 19:35:07.592575   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592582   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.592598   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0429 19:35:07.592614   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0429 19:35:07.592623   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592631   49175 command_runner.go:130] >       "size": "1363676",
	I0429 19:35:07.592640   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.592652   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.592662   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.592670   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.592680   49175 command_runner.go:130] >     },
	I0429 19:35:07.592685   49175 command_runner.go:130] >     {
	I0429 19:35:07.592697   49175 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0429 19:35:07.592707   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.592716   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0429 19:35:07.592732   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592743   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.592756   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0429 19:35:07.592773   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0429 19:35:07.592792   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592801   49175 command_runner.go:130] >       "size": "31470524",
	I0429 19:35:07.592809   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.592819   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.592827   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.592836   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.592843   49175 command_runner.go:130] >     },
	I0429 19:35:07.592851   49175 command_runner.go:130] >     {
	I0429 19:35:07.592862   49175 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0429 19:35:07.592872   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.592883   49175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0429 19:35:07.592891   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592898   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.592913   49175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0429 19:35:07.592937   49175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0429 19:35:07.592949   49175 command_runner.go:130] >       ],
	I0429 19:35:07.592956   49175 command_runner.go:130] >       "size": "61245718",
	I0429 19:35:07.592962   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.592970   49175 command_runner.go:130] >       "username": "nonroot",
	I0429 19:35:07.592980   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.592988   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.592996   49175 command_runner.go:130] >     },
	I0429 19:35:07.593003   49175 command_runner.go:130] >     {
	I0429 19:35:07.593014   49175 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0429 19:35:07.593024   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.593034   49175 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0429 19:35:07.593042   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593049   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.593065   49175 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0429 19:35:07.593079   49175 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0429 19:35:07.593088   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593096   49175 command_runner.go:130] >       "size": "150779692",
	I0429 19:35:07.593111   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.593122   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.593130   49175 command_runner.go:130] >       },
	I0429 19:35:07.593137   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.593145   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.593155   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.593161   49175 command_runner.go:130] >     },
	I0429 19:35:07.593171   49175 command_runner.go:130] >     {
	I0429 19:35:07.593183   49175 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0429 19:35:07.593193   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.593203   49175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0429 19:35:07.593211   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593219   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.593234   49175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0429 19:35:07.593249   49175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0429 19:35:07.593259   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593267   49175 command_runner.go:130] >       "size": "117609952",
	I0429 19:35:07.593277   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.593283   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.593288   49175 command_runner.go:130] >       },
	I0429 19:35:07.593296   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.593303   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.593387   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.593405   49175 command_runner.go:130] >     },
	I0429 19:35:07.593411   49175 command_runner.go:130] >     {
	I0429 19:35:07.593422   49175 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0429 19:35:07.593433   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.593443   49175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0429 19:35:07.593452   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593460   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.593478   49175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0429 19:35:07.593494   49175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0429 19:35:07.593503   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593511   49175 command_runner.go:130] >       "size": "112170310",
	I0429 19:35:07.593520   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.593527   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.593557   49175 command_runner.go:130] >       },
	I0429 19:35:07.593567   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.593574   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.593594   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.593604   49175 command_runner.go:130] >     },
	I0429 19:35:07.593611   49175 command_runner.go:130] >     {
	I0429 19:35:07.593624   49175 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0429 19:35:07.593634   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.593648   49175 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0429 19:35:07.593656   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593663   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.593695   49175 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0429 19:35:07.593711   49175 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0429 19:35:07.593721   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593728   49175 command_runner.go:130] >       "size": "85932953",
	I0429 19:35:07.593737   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.593745   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.593755   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.593764   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.593769   49175 command_runner.go:130] >     },
	I0429 19:35:07.593774   49175 command_runner.go:130] >     {
	I0429 19:35:07.593783   49175 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0429 19:35:07.593792   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.593801   49175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0429 19:35:07.593806   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593813   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.593825   49175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0429 19:35:07.593842   49175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0429 19:35:07.593851   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593860   49175 command_runner.go:130] >       "size": "63026502",
	I0429 19:35:07.593870   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.593879   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.593886   49175 command_runner.go:130] >       },
	I0429 19:35:07.593896   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.593916   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.593927   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.593941   49175 command_runner.go:130] >     },
	I0429 19:35:07.593951   49175 command_runner.go:130] >     {
	I0429 19:35:07.593963   49175 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0429 19:35:07.593972   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.593981   49175 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0429 19:35:07.593990   49175 command_runner.go:130] >       ],
	I0429 19:35:07.593998   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.594013   49175 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0429 19:35:07.594028   49175 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0429 19:35:07.594037   49175 command_runner.go:130] >       ],
	I0429 19:35:07.594045   49175 command_runner.go:130] >       "size": "750414",
	I0429 19:35:07.594054   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.594062   49175 command_runner.go:130] >         "value": "65535"
	I0429 19:35:07.594083   49175 command_runner.go:130] >       },
	I0429 19:35:07.594090   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.594100   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.594108   49175 command_runner.go:130] >       "pinned": true
	I0429 19:35:07.594116   49175 command_runner.go:130] >     }
	I0429 19:35:07.594122   49175 command_runner.go:130] >   ]
	I0429 19:35:07.594127   49175 command_runner.go:130] > }
	I0429 19:35:07.594323   49175 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 19:35:07.594337   49175 crio.go:433] Images already preloaded, skipping extraction
	I0429 19:35:07.594394   49175 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:35:07.633745   49175 command_runner.go:130] > {
	I0429 19:35:07.633768   49175 command_runner.go:130] >   "images": [
	I0429 19:35:07.633773   49175 command_runner.go:130] >     {
	I0429 19:35:07.633784   49175 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0429 19:35:07.633791   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.633799   49175 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0429 19:35:07.633804   49175 command_runner.go:130] >       ],
	I0429 19:35:07.633810   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.633822   49175 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0429 19:35:07.633832   49175 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0429 19:35:07.633838   49175 command_runner.go:130] >       ],
	I0429 19:35:07.633845   49175 command_runner.go:130] >       "size": "65291810",
	I0429 19:35:07.633856   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.633864   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.633892   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.633903   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.633911   49175 command_runner.go:130] >     },
	I0429 19:35:07.633916   49175 command_runner.go:130] >     {
	I0429 19:35:07.633928   49175 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0429 19:35:07.633938   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.633949   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0429 19:35:07.633957   49175 command_runner.go:130] >       ],
	I0429 19:35:07.633965   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.633981   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0429 19:35:07.633996   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0429 19:35:07.634005   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634019   49175 command_runner.go:130] >       "size": "1363676",
	I0429 19:35:07.634028   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.634041   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.634050   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.634057   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.634076   49175 command_runner.go:130] >     },
	I0429 19:35:07.634082   49175 command_runner.go:130] >     {
	I0429 19:35:07.634094   49175 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0429 19:35:07.634104   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.634114   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0429 19:35:07.634123   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634129   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.634147   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0429 19:35:07.634163   49175 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0429 19:35:07.634172   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634179   49175 command_runner.go:130] >       "size": "31470524",
	I0429 19:35:07.634190   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.634198   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.634206   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.634217   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.634225   49175 command_runner.go:130] >     },
	I0429 19:35:07.634231   49175 command_runner.go:130] >     {
	I0429 19:35:07.634245   49175 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0429 19:35:07.634256   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.634266   49175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0429 19:35:07.634274   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634280   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.634296   49175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0429 19:35:07.634327   49175 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0429 19:35:07.634336   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634343   49175 command_runner.go:130] >       "size": "61245718",
	I0429 19:35:07.634349   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.634360   49175 command_runner.go:130] >       "username": "nonroot",
	I0429 19:35:07.634371   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.634379   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.634388   49175 command_runner.go:130] >     },
	I0429 19:35:07.634404   49175 command_runner.go:130] >     {
	I0429 19:35:07.634418   49175 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0429 19:35:07.634428   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.634439   49175 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0429 19:35:07.634447   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634454   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.634466   49175 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0429 19:35:07.634481   49175 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0429 19:35:07.634490   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634497   49175 command_runner.go:130] >       "size": "150779692",
	I0429 19:35:07.634506   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.634513   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.634522   49175 command_runner.go:130] >       },
	I0429 19:35:07.634530   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.634539   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.634546   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.634554   49175 command_runner.go:130] >     },
	I0429 19:35:07.634560   49175 command_runner.go:130] >     {
	I0429 19:35:07.634571   49175 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0429 19:35:07.634581   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.634593   49175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0429 19:35:07.634600   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634608   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.634624   49175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0429 19:35:07.634639   49175 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0429 19:35:07.634647   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634655   49175 command_runner.go:130] >       "size": "117609952",
	I0429 19:35:07.634665   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.634673   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.634679   49175 command_runner.go:130] >       },
	I0429 19:35:07.634688   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.634698   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.634706   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.634719   49175 command_runner.go:130] >     },
	I0429 19:35:07.634728   49175 command_runner.go:130] >     {
	I0429 19:35:07.634739   49175 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0429 19:35:07.634755   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.634768   49175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0429 19:35:07.634777   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634784   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.634800   49175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0429 19:35:07.634816   49175 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0429 19:35:07.634829   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634840   49175 command_runner.go:130] >       "size": "112170310",
	I0429 19:35:07.634847   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.634854   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.634861   49175 command_runner.go:130] >       },
	I0429 19:35:07.634869   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.634875   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.634882   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.634888   49175 command_runner.go:130] >     },
	I0429 19:35:07.634895   49175 command_runner.go:130] >     {
	I0429 19:35:07.634907   49175 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0429 19:35:07.634917   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.634926   49175 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0429 19:35:07.634935   49175 command_runner.go:130] >       ],
	I0429 19:35:07.634943   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.634975   49175 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0429 19:35:07.634991   49175 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0429 19:35:07.634998   49175 command_runner.go:130] >       ],
	I0429 19:35:07.635008   49175 command_runner.go:130] >       "size": "85932953",
	I0429 19:35:07.635016   49175 command_runner.go:130] >       "uid": null,
	I0429 19:35:07.635026   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.635034   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.635043   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.635049   49175 command_runner.go:130] >     },
	I0429 19:35:07.635055   49175 command_runner.go:130] >     {
	I0429 19:35:07.635066   49175 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0429 19:35:07.635076   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.635085   49175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0429 19:35:07.635093   49175 command_runner.go:130] >       ],
	I0429 19:35:07.635101   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.635123   49175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0429 19:35:07.635139   49175 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0429 19:35:07.635163   49175 command_runner.go:130] >       ],
	I0429 19:35:07.635173   49175 command_runner.go:130] >       "size": "63026502",
	I0429 19:35:07.635179   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.635184   49175 command_runner.go:130] >         "value": "0"
	I0429 19:35:07.635190   49175 command_runner.go:130] >       },
	I0429 19:35:07.635198   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.635207   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.635214   49175 command_runner.go:130] >       "pinned": false
	I0429 19:35:07.635223   49175 command_runner.go:130] >     },
	I0429 19:35:07.635229   49175 command_runner.go:130] >     {
	I0429 19:35:07.635242   49175 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0429 19:35:07.635251   49175 command_runner.go:130] >       "repoTags": [
	I0429 19:35:07.635260   49175 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0429 19:35:07.635269   49175 command_runner.go:130] >       ],
	I0429 19:35:07.635276   49175 command_runner.go:130] >       "repoDigests": [
	I0429 19:35:07.635292   49175 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0429 19:35:07.635315   49175 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0429 19:35:07.635324   49175 command_runner.go:130] >       ],
	I0429 19:35:07.635333   49175 command_runner.go:130] >       "size": "750414",
	I0429 19:35:07.635341   49175 command_runner.go:130] >       "uid": {
	I0429 19:35:07.635349   49175 command_runner.go:130] >         "value": "65535"
	I0429 19:35:07.635358   49175 command_runner.go:130] >       },
	I0429 19:35:07.635365   49175 command_runner.go:130] >       "username": "",
	I0429 19:35:07.635374   49175 command_runner.go:130] >       "spec": null,
	I0429 19:35:07.635382   49175 command_runner.go:130] >       "pinned": true
	I0429 19:35:07.635390   49175 command_runner.go:130] >     }
	I0429 19:35:07.635395   49175 command_runner.go:130] >   ]
	I0429 19:35:07.635400   49175 command_runner.go:130] > }
	I0429 19:35:07.635544   49175 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 19:35:07.635558   49175 cache_images.go:84] Images are preloaded, skipping loading
	I0429 19:35:07.635568   49175 kubeadm.go:928] updating node { 192.168.39.127 8443 v1.30.0 crio true true} ...
	I0429 19:35:07.635709   49175 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-773806 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-773806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:35:07.635790   49175 ssh_runner.go:195] Run: crio config
	I0429 19:35:07.675183   49175 command_runner.go:130] ! time="2024-04-29 19:35:07.649353080Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0429 19:35:07.682393   49175 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0429 19:35:07.689141   49175 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0429 19:35:07.689163   49175 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0429 19:35:07.689170   49175 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0429 19:35:07.689173   49175 command_runner.go:130] > #
	I0429 19:35:07.689179   49175 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0429 19:35:07.689185   49175 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0429 19:35:07.689191   49175 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0429 19:35:07.689199   49175 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0429 19:35:07.689202   49175 command_runner.go:130] > # reload'.
	I0429 19:35:07.689208   49175 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0429 19:35:07.689214   49175 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0429 19:35:07.689221   49175 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0429 19:35:07.689236   49175 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0429 19:35:07.689243   49175 command_runner.go:130] > [crio]
	I0429 19:35:07.689249   49175 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0429 19:35:07.689257   49175 command_runner.go:130] > # containers images, in this directory.
	I0429 19:35:07.689262   49175 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0429 19:35:07.689272   49175 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0429 19:35:07.689289   49175 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0429 19:35:07.689297   49175 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0429 19:35:07.689301   49175 command_runner.go:130] > # imagestore = ""
	I0429 19:35:07.689308   49175 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0429 19:35:07.689314   49175 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0429 19:35:07.689321   49175 command_runner.go:130] > storage_driver = "overlay"
	I0429 19:35:07.689326   49175 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0429 19:35:07.689333   49175 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0429 19:35:07.689337   49175 command_runner.go:130] > storage_option = [
	I0429 19:35:07.689344   49175 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0429 19:35:07.689347   49175 command_runner.go:130] > ]
	I0429 19:35:07.689356   49175 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0429 19:35:07.689364   49175 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0429 19:35:07.689369   49175 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0429 19:35:07.689374   49175 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0429 19:35:07.689382   49175 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0429 19:35:07.689389   49175 command_runner.go:130] > # always happen on a node reboot
	I0429 19:35:07.689394   49175 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0429 19:35:07.689407   49175 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0429 19:35:07.689416   49175 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0429 19:35:07.689421   49175 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0429 19:35:07.689428   49175 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0429 19:35:07.689435   49175 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0429 19:35:07.689445   49175 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0429 19:35:07.689451   49175 command_runner.go:130] > # internal_wipe = true
	I0429 19:35:07.689464   49175 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0429 19:35:07.689472   49175 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0429 19:35:07.689478   49175 command_runner.go:130] > # internal_repair = false
	I0429 19:35:07.689483   49175 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0429 19:35:07.689491   49175 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0429 19:35:07.689503   49175 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0429 19:35:07.689511   49175 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0429 19:35:07.689519   49175 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0429 19:35:07.689526   49175 command_runner.go:130] > [crio.api]
	I0429 19:35:07.689530   49175 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0429 19:35:07.689537   49175 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0429 19:35:07.689542   49175 command_runner.go:130] > # IP address on which the stream server will listen.
	I0429 19:35:07.689549   49175 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0429 19:35:07.689555   49175 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0429 19:35:07.689562   49175 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0429 19:35:07.689566   49175 command_runner.go:130] > # stream_port = "0"
	I0429 19:35:07.689578   49175 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0429 19:35:07.689585   49175 command_runner.go:130] > # stream_enable_tls = false
	I0429 19:35:07.689591   49175 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0429 19:35:07.689598   49175 command_runner.go:130] > # stream_idle_timeout = ""
	I0429 19:35:07.689603   49175 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0429 19:35:07.689611   49175 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0429 19:35:07.689615   49175 command_runner.go:130] > # minutes.
	I0429 19:35:07.689621   49175 command_runner.go:130] > # stream_tls_cert = ""
	I0429 19:35:07.689627   49175 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0429 19:35:07.689641   49175 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0429 19:35:07.689647   49175 command_runner.go:130] > # stream_tls_key = ""
	I0429 19:35:07.689653   49175 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0429 19:35:07.689659   49175 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0429 19:35:07.689676   49175 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0429 19:35:07.689689   49175 command_runner.go:130] > # stream_tls_ca = ""
	I0429 19:35:07.689696   49175 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0429 19:35:07.689700   49175 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0429 19:35:07.689708   49175 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0429 19:35:07.689715   49175 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0429 19:35:07.689722   49175 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0429 19:35:07.689729   49175 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0429 19:35:07.689733   49175 command_runner.go:130] > [crio.runtime]
	I0429 19:35:07.689741   49175 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0429 19:35:07.689749   49175 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0429 19:35:07.689755   49175 command_runner.go:130] > # "nofile=1024:2048"
	I0429 19:35:07.689772   49175 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0429 19:35:07.689778   49175 command_runner.go:130] > # default_ulimits = [
	I0429 19:35:07.689781   49175 command_runner.go:130] > # ]
	I0429 19:35:07.689787   49175 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0429 19:35:07.689793   49175 command_runner.go:130] > # no_pivot = false
	I0429 19:35:07.689801   49175 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0429 19:35:07.689809   49175 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0429 19:35:07.689815   49175 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0429 19:35:07.689826   49175 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0429 19:35:07.689834   49175 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0429 19:35:07.689841   49175 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0429 19:35:07.689848   49175 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0429 19:35:07.689852   49175 command_runner.go:130] > # Cgroup setting for conmon
	I0429 19:35:07.689861   49175 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0429 19:35:07.689867   49175 command_runner.go:130] > conmon_cgroup = "pod"
	I0429 19:35:07.689873   49175 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0429 19:35:07.689880   49175 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0429 19:35:07.689887   49175 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0429 19:35:07.689893   49175 command_runner.go:130] > conmon_env = [
	I0429 19:35:07.689899   49175 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0429 19:35:07.689904   49175 command_runner.go:130] > ]
	I0429 19:35:07.689909   49175 command_runner.go:130] > # Additional environment variables to set for all the
	I0429 19:35:07.689916   49175 command_runner.go:130] > # containers. These are overridden if set in the
	I0429 19:35:07.689921   49175 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0429 19:35:07.689928   49175 command_runner.go:130] > # default_env = [
	I0429 19:35:07.689931   49175 command_runner.go:130] > # ]
	I0429 19:35:07.689939   49175 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0429 19:35:07.689946   49175 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0429 19:35:07.689952   49175 command_runner.go:130] > # selinux = false
	I0429 19:35:07.689958   49175 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0429 19:35:07.689966   49175 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0429 19:35:07.689974   49175 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0429 19:35:07.689980   49175 command_runner.go:130] > # seccomp_profile = ""
	I0429 19:35:07.689985   49175 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0429 19:35:07.689993   49175 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0429 19:35:07.689998   49175 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0429 19:35:07.690010   49175 command_runner.go:130] > # which might increase security.
	I0429 19:35:07.690017   49175 command_runner.go:130] > # This option is currently deprecated,
	I0429 19:35:07.690023   49175 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0429 19:35:07.690030   49175 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0429 19:35:07.690036   49175 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0429 19:35:07.690044   49175 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0429 19:35:07.690054   49175 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0429 19:35:07.690062   49175 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0429 19:35:07.690088   49175 command_runner.go:130] > # This option supports live configuration reload.
	I0429 19:35:07.690099   49175 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0429 19:35:07.690110   49175 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0429 19:35:07.690118   49175 command_runner.go:130] > # the cgroup blockio controller.
	I0429 19:35:07.690122   49175 command_runner.go:130] > # blockio_config_file = ""
	I0429 19:35:07.690131   49175 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0429 19:35:07.690139   49175 command_runner.go:130] > # blockio parameters.
	I0429 19:35:07.690143   49175 command_runner.go:130] > # blockio_reload = false
	I0429 19:35:07.690150   49175 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0429 19:35:07.690157   49175 command_runner.go:130] > # irqbalance daemon.
	I0429 19:35:07.690162   49175 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0429 19:35:07.690170   49175 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0429 19:35:07.690177   49175 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0429 19:35:07.690186   49175 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0429 19:35:07.690195   49175 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0429 19:35:07.690201   49175 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0429 19:35:07.690208   49175 command_runner.go:130] > # This option supports live configuration reload.
	I0429 19:35:07.690212   49175 command_runner.go:130] > # rdt_config_file = ""
	I0429 19:35:07.690220   49175 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0429 19:35:07.690224   49175 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0429 19:35:07.690278   49175 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0429 19:35:07.690289   49175 command_runner.go:130] > # separate_pull_cgroup = ""
	I0429 19:35:07.690295   49175 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0429 19:35:07.690301   49175 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0429 19:35:07.690307   49175 command_runner.go:130] > # will be added.
	I0429 19:35:07.690312   49175 command_runner.go:130] > # default_capabilities = [
	I0429 19:35:07.690318   49175 command_runner.go:130] > # 	"CHOWN",
	I0429 19:35:07.690322   49175 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0429 19:35:07.690331   49175 command_runner.go:130] > # 	"FSETID",
	I0429 19:35:07.690337   49175 command_runner.go:130] > # 	"FOWNER",
	I0429 19:35:07.690341   49175 command_runner.go:130] > # 	"SETGID",
	I0429 19:35:07.690347   49175 command_runner.go:130] > # 	"SETUID",
	I0429 19:35:07.690351   49175 command_runner.go:130] > # 	"SETPCAP",
	I0429 19:35:07.690357   49175 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0429 19:35:07.690360   49175 command_runner.go:130] > # 	"KILL",
	I0429 19:35:07.690366   49175 command_runner.go:130] > # ]
	I0429 19:35:07.690374   49175 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0429 19:35:07.690382   49175 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0429 19:35:07.690390   49175 command_runner.go:130] > # add_inheritable_capabilities = false
	I0429 19:35:07.690398   49175 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0429 19:35:07.690406   49175 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0429 19:35:07.690410   49175 command_runner.go:130] > default_sysctls = [
	I0429 19:35:07.690417   49175 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0429 19:35:07.690421   49175 command_runner.go:130] > ]
	I0429 19:35:07.690427   49175 command_runner.go:130] > # List of devices on the host that a
	I0429 19:35:07.690433   49175 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0429 19:35:07.690440   49175 command_runner.go:130] > # allowed_devices = [
	I0429 19:35:07.690444   49175 command_runner.go:130] > # 	"/dev/fuse",
	I0429 19:35:07.690450   49175 command_runner.go:130] > # ]
	I0429 19:35:07.690458   49175 command_runner.go:130] > # List of additional devices, specified as
	I0429 19:35:07.690467   49175 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0429 19:35:07.690475   49175 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0429 19:35:07.690483   49175 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0429 19:35:07.690489   49175 command_runner.go:130] > # additional_devices = [
	I0429 19:35:07.690492   49175 command_runner.go:130] > # ]
	I0429 19:35:07.690499   49175 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0429 19:35:07.690507   49175 command_runner.go:130] > # cdi_spec_dirs = [
	I0429 19:35:07.690513   49175 command_runner.go:130] > # 	"/etc/cdi",
	I0429 19:35:07.690518   49175 command_runner.go:130] > # 	"/var/run/cdi",
	I0429 19:35:07.690523   49175 command_runner.go:130] > # ]
	I0429 19:35:07.690529   49175 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0429 19:35:07.690537   49175 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0429 19:35:07.690544   49175 command_runner.go:130] > # Defaults to false.
	I0429 19:35:07.690549   49175 command_runner.go:130] > # device_ownership_from_security_context = false
	I0429 19:35:07.690562   49175 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0429 19:35:07.690571   49175 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0429 19:35:07.690582   49175 command_runner.go:130] > # hooks_dir = [
	I0429 19:35:07.690588   49175 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0429 19:35:07.690592   49175 command_runner.go:130] > # ]
	I0429 19:35:07.690597   49175 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0429 19:35:07.690605   49175 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0429 19:35:07.690611   49175 command_runner.go:130] > # its default mounts from the following two files:
	I0429 19:35:07.690617   49175 command_runner.go:130] > #
	I0429 19:35:07.690623   49175 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0429 19:35:07.690641   49175 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0429 19:35:07.690649   49175 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0429 19:35:07.690653   49175 command_runner.go:130] > #
	I0429 19:35:07.690659   49175 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0429 19:35:07.690667   49175 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0429 19:35:07.690675   49175 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0429 19:35:07.690681   49175 command_runner.go:130] > #      only add mounts it finds in this file.
	I0429 19:35:07.690687   49175 command_runner.go:130] > #
	I0429 19:35:07.690691   49175 command_runner.go:130] > # default_mounts_file = ""
	I0429 19:35:07.690698   49175 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0429 19:35:07.690706   49175 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0429 19:35:07.690712   49175 command_runner.go:130] > pids_limit = 1024
	I0429 19:35:07.690718   49175 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0429 19:35:07.690726   49175 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0429 19:35:07.690733   49175 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0429 19:35:07.690743   49175 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0429 19:35:07.690750   49175 command_runner.go:130] > # log_size_max = -1
	I0429 19:35:07.690757   49175 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0429 19:35:07.690763   49175 command_runner.go:130] > # log_to_journald = false
	I0429 19:35:07.690769   49175 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0429 19:35:07.690776   49175 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0429 19:35:07.690781   49175 command_runner.go:130] > # Path to directory for container attach sockets.
	I0429 19:35:07.690787   49175 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0429 19:35:07.690793   49175 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0429 19:35:07.690799   49175 command_runner.go:130] > # bind_mount_prefix = ""
	I0429 19:35:07.690804   49175 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0429 19:35:07.690816   49175 command_runner.go:130] > # read_only = false
	I0429 19:35:07.690825   49175 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0429 19:35:07.690833   49175 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0429 19:35:07.690840   49175 command_runner.go:130] > # live configuration reload.
	I0429 19:35:07.690844   49175 command_runner.go:130] > # log_level = "info"
	I0429 19:35:07.690852   49175 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0429 19:35:07.690857   49175 command_runner.go:130] > # This option supports live configuration reload.
	I0429 19:35:07.690863   49175 command_runner.go:130] > # log_filter = ""
	I0429 19:35:07.690869   49175 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0429 19:35:07.690878   49175 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0429 19:35:07.690884   49175 command_runner.go:130] > # separated by comma.
	I0429 19:35:07.690891   49175 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 19:35:07.690897   49175 command_runner.go:130] > # uid_mappings = ""
	I0429 19:35:07.690903   49175 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0429 19:35:07.690911   49175 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0429 19:35:07.690915   49175 command_runner.go:130] > # separated by comma.
	I0429 19:35:07.690924   49175 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 19:35:07.690933   49175 command_runner.go:130] > # gid_mappings = ""
	I0429 19:35:07.690939   49175 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0429 19:35:07.690948   49175 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0429 19:35:07.690956   49175 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0429 19:35:07.690965   49175 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 19:35:07.690971   49175 command_runner.go:130] > # minimum_mappable_uid = -1
	I0429 19:35:07.690977   49175 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0429 19:35:07.690985   49175 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0429 19:35:07.690993   49175 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0429 19:35:07.691003   49175 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0429 19:35:07.691009   49175 command_runner.go:130] > # minimum_mappable_gid = -1
	I0429 19:35:07.691015   49175 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0429 19:35:07.691023   49175 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0429 19:35:07.691034   49175 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0429 19:35:07.691040   49175 command_runner.go:130] > # ctr_stop_timeout = 30
	I0429 19:35:07.691045   49175 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0429 19:35:07.691053   49175 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0429 19:35:07.691060   49175 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0429 19:35:07.691065   49175 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0429 19:35:07.691077   49175 command_runner.go:130] > drop_infra_ctr = false
	I0429 19:35:07.691085   49175 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0429 19:35:07.691092   49175 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0429 19:35:07.691101   49175 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0429 19:35:07.691107   49175 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0429 19:35:07.691113   49175 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0429 19:35:07.691121   49175 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0429 19:35:07.691129   49175 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0429 19:35:07.691136   49175 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0429 19:35:07.691142   49175 command_runner.go:130] > # shared_cpuset = ""
	I0429 19:35:07.691148   49175 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0429 19:35:07.691155   49175 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0429 19:35:07.691159   49175 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0429 19:35:07.691169   49175 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0429 19:35:07.691173   49175 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0429 19:35:07.691181   49175 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0429 19:35:07.691189   49175 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0429 19:35:07.691196   49175 command_runner.go:130] > # enable_criu_support = false
	I0429 19:35:07.691201   49175 command_runner.go:130] > # Enable/disable the generation of the container,
	I0429 19:35:07.691209   49175 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0429 19:35:07.691215   49175 command_runner.go:130] > # enable_pod_events = false
	I0429 19:35:07.691221   49175 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0429 19:35:07.691238   49175 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0429 19:35:07.691243   49175 command_runner.go:130] > # default_runtime = "runc"
	I0429 19:35:07.691250   49175 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0429 19:35:07.691257   49175 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0429 19:35:07.691268   49175 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0429 19:35:07.691275   49175 command_runner.go:130] > # creation as a file is not desired either.
	I0429 19:35:07.691283   49175 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0429 19:35:07.691289   49175 command_runner.go:130] > # the hostname is being managed dynamically.
	I0429 19:35:07.691294   49175 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0429 19:35:07.691299   49175 command_runner.go:130] > # ]
	I0429 19:35:07.691305   49175 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0429 19:35:07.691313   49175 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0429 19:35:07.691322   49175 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0429 19:35:07.691334   49175 command_runner.go:130] > # Each entry in the table should follow the format:
	I0429 19:35:07.691340   49175 command_runner.go:130] > #
	I0429 19:35:07.691345   49175 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0429 19:35:07.691352   49175 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0429 19:35:07.691402   49175 command_runner.go:130] > # runtime_type = "oci"
	I0429 19:35:07.691411   49175 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0429 19:35:07.691415   49175 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0429 19:35:07.691420   49175 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0429 19:35:07.691424   49175 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0429 19:35:07.691428   49175 command_runner.go:130] > # monitor_env = []
	I0429 19:35:07.691433   49175 command_runner.go:130] > # privileged_without_host_devices = false
	I0429 19:35:07.691441   49175 command_runner.go:130] > # allowed_annotations = []
	I0429 19:35:07.691449   49175 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0429 19:35:07.691456   49175 command_runner.go:130] > # Where:
	I0429 19:35:07.691461   49175 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0429 19:35:07.691469   49175 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0429 19:35:07.691477   49175 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0429 19:35:07.691486   49175 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0429 19:35:07.691494   49175 command_runner.go:130] > #   in $PATH.
	I0429 19:35:07.691501   49175 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0429 19:35:07.691508   49175 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0429 19:35:07.691514   49175 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0429 19:35:07.691520   49175 command_runner.go:130] > #   state.
	I0429 19:35:07.691526   49175 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0429 19:35:07.691534   49175 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0429 19:35:07.691539   49175 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0429 19:35:07.691547   49175 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0429 19:35:07.691553   49175 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0429 19:35:07.691561   49175 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0429 19:35:07.691567   49175 command_runner.go:130] > #   The currently recognized values are:
	I0429 19:35:07.691574   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0429 19:35:07.691582   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0429 19:35:07.691590   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0429 19:35:07.691598   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0429 19:35:07.691605   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0429 19:35:07.691614   49175 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0429 19:35:07.691629   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0429 19:35:07.691647   49175 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0429 19:35:07.691652   49175 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0429 19:35:07.691659   49175 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0429 19:35:07.691665   49175 command_runner.go:130] > #   deprecated option "conmon".
	I0429 19:35:07.691672   49175 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0429 19:35:07.691679   49175 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0429 19:35:07.691685   49175 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0429 19:35:07.691693   49175 command_runner.go:130] > #   should be moved to the container's cgroup
	I0429 19:35:07.691701   49175 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0429 19:35:07.691708   49175 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0429 19:35:07.691714   49175 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0429 19:35:07.691722   49175 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0429 19:35:07.691725   49175 command_runner.go:130] > #
	I0429 19:35:07.691729   49175 command_runner.go:130] > # Using the seccomp notifier feature:
	I0429 19:35:07.691737   49175 command_runner.go:130] > #
	I0429 19:35:07.691743   49175 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0429 19:35:07.691751   49175 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0429 19:35:07.691757   49175 command_runner.go:130] > #
	I0429 19:35:07.691763   49175 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0429 19:35:07.691771   49175 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0429 19:35:07.691777   49175 command_runner.go:130] > #
	I0429 19:35:07.691783   49175 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0429 19:35:07.691788   49175 command_runner.go:130] > # feature.
	I0429 19:35:07.691791   49175 command_runner.go:130] > #
	I0429 19:35:07.691799   49175 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0429 19:35:07.691805   49175 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0429 19:35:07.691814   49175 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0429 19:35:07.691822   49175 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0429 19:35:07.691831   49175 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0429 19:35:07.691834   49175 command_runner.go:130] > #
	I0429 19:35:07.691843   49175 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0429 19:35:07.691851   49175 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0429 19:35:07.691855   49175 command_runner.go:130] > #
	I0429 19:35:07.691860   49175 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0429 19:35:07.691868   49175 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0429 19:35:07.691876   49175 command_runner.go:130] > #
	I0429 19:35:07.691884   49175 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0429 19:35:07.691892   49175 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0429 19:35:07.691898   49175 command_runner.go:130] > # limitation.
	I0429 19:35:07.691903   49175 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0429 19:35:07.691910   49175 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0429 19:35:07.691914   49175 command_runner.go:130] > runtime_type = "oci"
	I0429 19:35:07.691920   49175 command_runner.go:130] > runtime_root = "/run/runc"
	I0429 19:35:07.691924   49175 command_runner.go:130] > runtime_config_path = ""
	I0429 19:35:07.691931   49175 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0429 19:35:07.691939   49175 command_runner.go:130] > monitor_cgroup = "pod"
	I0429 19:35:07.691946   49175 command_runner.go:130] > monitor_exec_cgroup = ""
	I0429 19:35:07.691949   49175 command_runner.go:130] > monitor_env = [
	I0429 19:35:07.691955   49175 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0429 19:35:07.691961   49175 command_runner.go:130] > ]
	I0429 19:35:07.691965   49175 command_runner.go:130] > privileged_without_host_devices = false
	I0429 19:35:07.691974   49175 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0429 19:35:07.691982   49175 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0429 19:35:07.691991   49175 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0429 19:35:07.692002   49175 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0429 19:35:07.692013   49175 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0429 19:35:07.692021   49175 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0429 19:35:07.692032   49175 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0429 19:35:07.692042   49175 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0429 19:35:07.692049   49175 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0429 19:35:07.692058   49175 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0429 19:35:07.692064   49175 command_runner.go:130] > # Example:
	I0429 19:35:07.692069   49175 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0429 19:35:07.692076   49175 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0429 19:35:07.692080   49175 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0429 19:35:07.692087   49175 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0429 19:35:07.692091   49175 command_runner.go:130] > # cpuset = 0
	I0429 19:35:07.692097   49175 command_runner.go:130] > # cpushares = "0-1"
	I0429 19:35:07.692100   49175 command_runner.go:130] > # Where:
	I0429 19:35:07.692107   49175 command_runner.go:130] > # The workload name is workload-type.
	I0429 19:35:07.692114   49175 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0429 19:35:07.692126   49175 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0429 19:35:07.692136   49175 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0429 19:35:07.692146   49175 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0429 19:35:07.692152   49175 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0429 19:35:07.692160   49175 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0429 19:35:07.692168   49175 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0429 19:35:07.692175   49175 command_runner.go:130] > # Default value is set to true
	I0429 19:35:07.692180   49175 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0429 19:35:07.692190   49175 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0429 19:35:07.692197   49175 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0429 19:35:07.692202   49175 command_runner.go:130] > # Default value is set to 'false'
	I0429 19:35:07.692207   49175 command_runner.go:130] > # disable_hostport_mapping = false
	I0429 19:35:07.692214   49175 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0429 19:35:07.692219   49175 command_runner.go:130] > #
	I0429 19:35:07.692224   49175 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0429 19:35:07.692230   49175 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0429 19:35:07.692236   49175 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0429 19:35:07.692242   49175 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0429 19:35:07.692249   49175 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0429 19:35:07.692252   49175 command_runner.go:130] > [crio.image]
	I0429 19:35:07.692258   49175 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0429 19:35:07.692262   49175 command_runner.go:130] > # default_transport = "docker://"
	I0429 19:35:07.692268   49175 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0429 19:35:07.692273   49175 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0429 19:35:07.692277   49175 command_runner.go:130] > # global_auth_file = ""
	I0429 19:35:07.692281   49175 command_runner.go:130] > # The image used to instantiate infra containers.
	I0429 19:35:07.692286   49175 command_runner.go:130] > # This option supports live configuration reload.
	I0429 19:35:07.692290   49175 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0429 19:35:07.692295   49175 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0429 19:35:07.692304   49175 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0429 19:35:07.692308   49175 command_runner.go:130] > # This option supports live configuration reload.
	I0429 19:35:07.692312   49175 command_runner.go:130] > # pause_image_auth_file = ""
	I0429 19:35:07.692318   49175 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0429 19:35:07.692340   49175 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0429 19:35:07.692346   49175 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0429 19:35:07.692351   49175 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0429 19:35:07.692359   49175 command_runner.go:130] > # pause_command = "/pause"
	I0429 19:35:07.692365   49175 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0429 19:35:07.692375   49175 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0429 19:35:07.692380   49175 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0429 19:35:07.692388   49175 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0429 19:35:07.692394   49175 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0429 19:35:07.692399   49175 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0429 19:35:07.692402   49175 command_runner.go:130] > # pinned_images = [
	I0429 19:35:07.692406   49175 command_runner.go:130] > # ]
	I0429 19:35:07.692411   49175 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0429 19:35:07.692417   49175 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0429 19:35:07.692423   49175 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0429 19:35:07.692429   49175 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0429 19:35:07.692437   49175 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0429 19:35:07.692440   49175 command_runner.go:130] > # signature_policy = ""
	I0429 19:35:07.692445   49175 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0429 19:35:07.692453   49175 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0429 19:35:07.692460   49175 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0429 19:35:07.692470   49175 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0429 19:35:07.692478   49175 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0429 19:35:07.692485   49175 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0429 19:35:07.692491   49175 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0429 19:35:07.692499   49175 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0429 19:35:07.692505   49175 command_runner.go:130] > # changing them here.
	I0429 19:35:07.692510   49175 command_runner.go:130] > # insecure_registries = [
	I0429 19:35:07.692515   49175 command_runner.go:130] > # ]
	I0429 19:35:07.692522   49175 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0429 19:35:07.692529   49175 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0429 19:35:07.692533   49175 command_runner.go:130] > # image_volumes = "mkdir"
	I0429 19:35:07.692538   49175 command_runner.go:130] > # Temporary directory to use for storing big files
	I0429 19:35:07.692544   49175 command_runner.go:130] > # big_files_temporary_dir = ""
	I0429 19:35:07.692550   49175 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0429 19:35:07.692556   49175 command_runner.go:130] > # CNI plugins.
	I0429 19:35:07.692560   49175 command_runner.go:130] > [crio.network]
	I0429 19:35:07.692569   49175 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0429 19:35:07.692576   49175 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0429 19:35:07.692585   49175 command_runner.go:130] > # cni_default_network = ""
	I0429 19:35:07.692593   49175 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0429 19:35:07.692598   49175 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0429 19:35:07.692606   49175 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0429 19:35:07.692612   49175 command_runner.go:130] > # plugin_dirs = [
	I0429 19:35:07.692615   49175 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0429 19:35:07.692621   49175 command_runner.go:130] > # ]
	I0429 19:35:07.692626   49175 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0429 19:35:07.692631   49175 command_runner.go:130] > [crio.metrics]
	I0429 19:35:07.692642   49175 command_runner.go:130] > # Globally enable or disable metrics support.
	I0429 19:35:07.692646   49175 command_runner.go:130] > enable_metrics = true
	I0429 19:35:07.692650   49175 command_runner.go:130] > # Specify enabled metrics collectors.
	I0429 19:35:07.692657   49175 command_runner.go:130] > # Per default all metrics are enabled.
	I0429 19:35:07.692663   49175 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0429 19:35:07.692671   49175 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0429 19:35:07.692679   49175 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0429 19:35:07.692683   49175 command_runner.go:130] > # metrics_collectors = [
	I0429 19:35:07.692689   49175 command_runner.go:130] > # 	"operations",
	I0429 19:35:07.692694   49175 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0429 19:35:07.692701   49175 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0429 19:35:07.692705   49175 command_runner.go:130] > # 	"operations_errors",
	I0429 19:35:07.692711   49175 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0429 19:35:07.692715   49175 command_runner.go:130] > # 	"image_pulls_by_name",
	I0429 19:35:07.692719   49175 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0429 19:35:07.692728   49175 command_runner.go:130] > # 	"image_pulls_failures",
	I0429 19:35:07.692735   49175 command_runner.go:130] > # 	"image_pulls_successes",
	I0429 19:35:07.692739   49175 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0429 19:35:07.692746   49175 command_runner.go:130] > # 	"image_layer_reuse",
	I0429 19:35:07.692750   49175 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0429 19:35:07.692756   49175 command_runner.go:130] > # 	"containers_oom_total",
	I0429 19:35:07.692760   49175 command_runner.go:130] > # 	"containers_oom",
	I0429 19:35:07.692766   49175 command_runner.go:130] > # 	"processes_defunct",
	I0429 19:35:07.692770   49175 command_runner.go:130] > # 	"operations_total",
	I0429 19:35:07.692776   49175 command_runner.go:130] > # 	"operations_latency_seconds",
	I0429 19:35:07.692781   49175 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0429 19:35:07.692787   49175 command_runner.go:130] > # 	"operations_errors_total",
	I0429 19:35:07.692796   49175 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0429 19:35:07.692804   49175 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0429 19:35:07.692808   49175 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0429 19:35:07.692814   49175 command_runner.go:130] > # 	"image_pulls_success_total",
	I0429 19:35:07.692819   49175 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0429 19:35:07.692823   49175 command_runner.go:130] > # 	"containers_oom_count_total",
	I0429 19:35:07.692828   49175 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0429 19:35:07.692835   49175 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0429 19:35:07.692838   49175 command_runner.go:130] > # ]
	I0429 19:35:07.692846   49175 command_runner.go:130] > # The port on which the metrics server will listen.
	I0429 19:35:07.692850   49175 command_runner.go:130] > # metrics_port = 9090
	I0429 19:35:07.692858   49175 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0429 19:35:07.692861   49175 command_runner.go:130] > # metrics_socket = ""
	I0429 19:35:07.692869   49175 command_runner.go:130] > # The certificate for the secure metrics server.
	I0429 19:35:07.692875   49175 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0429 19:35:07.692883   49175 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0429 19:35:07.692890   49175 command_runner.go:130] > # certificate on any modification event.
	I0429 19:35:07.692894   49175 command_runner.go:130] > # metrics_cert = ""
	I0429 19:35:07.692902   49175 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0429 19:35:07.692907   49175 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0429 19:35:07.692913   49175 command_runner.go:130] > # metrics_key = ""
	I0429 19:35:07.692918   49175 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0429 19:35:07.692924   49175 command_runner.go:130] > [crio.tracing]
	I0429 19:35:07.692930   49175 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0429 19:35:07.692937   49175 command_runner.go:130] > # enable_tracing = false
	I0429 19:35:07.692942   49175 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0429 19:35:07.692949   49175 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0429 19:35:07.692956   49175 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0429 19:35:07.692963   49175 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0429 19:35:07.692967   49175 command_runner.go:130] > # CRI-O NRI configuration.
	I0429 19:35:07.692973   49175 command_runner.go:130] > [crio.nri]
	I0429 19:35:07.692978   49175 command_runner.go:130] > # Globally enable or disable NRI.
	I0429 19:35:07.692983   49175 command_runner.go:130] > # enable_nri = false
	I0429 19:35:07.692990   49175 command_runner.go:130] > # NRI socket to listen on.
	I0429 19:35:07.692997   49175 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0429 19:35:07.693001   49175 command_runner.go:130] > # NRI plugin directory to use.
	I0429 19:35:07.693013   49175 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0429 19:35:07.693020   49175 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0429 19:35:07.693025   49175 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0429 19:35:07.693033   49175 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0429 19:35:07.693040   49175 command_runner.go:130] > # nri_disable_connections = false
	I0429 19:35:07.693045   49175 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0429 19:35:07.693052   49175 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0429 19:35:07.693057   49175 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0429 19:35:07.693063   49175 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0429 19:35:07.693069   49175 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0429 19:35:07.693074   49175 command_runner.go:130] > [crio.stats]
	I0429 19:35:07.693084   49175 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0429 19:35:07.693092   49175 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0429 19:35:07.693096   49175 command_runner.go:130] > # stats_collection_period = 0
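The configuration dump above describes how extra OCI runtime handlers are declared under [crio.runtime.runtimes] and which experimental annotations (for example the seccomp notifier) a handler may be allowed to process. A minimal, hypothetical sketch of a drop-in that adds a second handler follows; the crun binary path, the drop-in filename, and the restart step are assumptions for illustration, not something this test run performed.

	# Hypothetical CRI-O drop-in adding a "crun" handler that may process the
	# seccomp notifier annotation; the paths and filename here are assumptions.
	sudo tee /etc/crio/crio.conf.d/10-crun.conf >/dev/null <<-'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
	  "io.kubernetes.cri-o.seccompNotifierAction",
	]
	EOF
	sudo systemctl restart crio   # pick up the new handler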
	I0429 19:35:07.693288   49175 cni.go:84] Creating CNI manager for ""
	I0429 19:35:07.693305   49175 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0429 19:35:07.693336   49175 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 19:35:07.693362   49175 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.127 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-773806 NodeName:multinode-773806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.127"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 19:35:07.693494   49175 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.127
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-773806"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.127
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.127"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
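The InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents above are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a hedged sketch, one way to sanity-check such a rendered file on the node is shown below; it assumes the "kubeadm config validate" subcommand (present in recent kubeadm releases) and the paths that appear in this log.

	# Sketch only: validate the rendered multi-document config on the node.
	# Assumes "kubeadm config validate" exists in this kubeadm build and the paths from this log.
	sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new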
	
	I0429 19:35:07.693562   49175 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:35:07.706246   49175 command_runner.go:130] > kubeadm
	I0429 19:35:07.706269   49175 command_runner.go:130] > kubectl
	I0429 19:35:07.706274   49175 command_runner.go:130] > kubelet
	I0429 19:35:07.706294   49175 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 19:35:07.706338   49175 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 19:35:07.718079   49175 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0429 19:35:07.737985   49175 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:35:07.770879   49175 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0429 19:35:07.807852   49175 ssh_runner.go:195] Run: grep 192.168.39.127	control-plane.minikube.internal$ /etc/hosts
	I0429 19:35:07.813017   49175 command_runner.go:130] > 192.168.39.127	control-plane.minikube.internal
	I0429 19:35:07.813096   49175 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:35:07.961630   49175 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:35:07.979508   49175 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806 for IP: 192.168.39.127
	I0429 19:35:07.979531   49175 certs.go:194] generating shared ca certs ...
	I0429 19:35:07.979551   49175 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:35:07.979707   49175 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 19:35:07.979774   49175 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 19:35:07.979789   49175 certs.go:256] generating profile certs ...
	I0429 19:35:07.979890   49175 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/client.key
	I0429 19:35:07.979977   49175 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/apiserver.key.a5d6a352
	I0429 19:35:07.980030   49175 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/proxy-client.key
	I0429 19:35:07.980043   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 19:35:07.980064   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 19:35:07.980081   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 19:35:07.980097   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 19:35:07.980115   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 19:35:07.980133   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 19:35:07.980153   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 19:35:07.980169   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 19:35:07.980228   49175 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 19:35:07.980294   49175 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 19:35:07.980308   49175 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 19:35:07.980339   49175 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 19:35:07.980385   49175 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 19:35:07.980415   49175 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 19:35:07.980467   49175 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:35:07.980509   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:35:07.980527   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem -> /usr/share/ca-certificates/15124.pem
	I0429 19:35:07.980541   49175 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> /usr/share/ca-certificates/151242.pem
	I0429 19:35:07.981294   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:35:08.009976   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 19:35:08.038420   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:35:08.067069   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:35:08.095154   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 19:35:08.120666   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 19:35:08.150089   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:35:08.178991   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/multinode-773806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 19:35:08.208626   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:35:08.236974   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 19:35:08.264473   49175 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 19:35:08.292510   49175 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 19:35:08.311420   49175 ssh_runner.go:195] Run: openssl version
	I0429 19:35:08.317914   49175 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0429 19:35:08.317992   49175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:35:08.329630   49175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:35:08.334743   49175 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:35:08.334767   49175 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:35:08.334818   49175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:35:08.340731   49175 command_runner.go:130] > b5213941
	I0429 19:35:08.340905   49175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:35:08.350777   49175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 19:35:08.362197   49175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 19:35:08.367049   49175 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:35:08.367068   49175 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:35:08.367099   49175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 19:35:08.373161   49175 command_runner.go:130] > 51391683
	I0429 19:35:08.373206   49175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 19:35:08.383385   49175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 19:35:08.396637   49175 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 19:35:08.401890   49175 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:35:08.402115   49175 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:35:08.402161   49175 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 19:35:08.408789   49175 command_runner.go:130] > 3ec20f2e
	I0429 19:35:08.408956   49175 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
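Each CA certificate above is installed the same way: it is linked under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a <hash>.0 symlink is created in /etc/ssl/certs so the default OpenSSL lookup can find it. A generic sketch of that pattern, with a placeholder certificate name, is:

	# Generic form of the hash-and-symlink pattern above; example.pem is a placeholder.
	CERT=/usr/share/ca-certificates/example.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # OpenSSL finds CAs via <hash>.0 links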
	I0429 19:35:08.420245   49175 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:35:08.425242   49175 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:35:08.425272   49175 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0429 19:35:08.425281   49175 command_runner.go:130] > Device: 253,1	Inode: 9433622     Links: 1
	I0429 19:35:08.425290   49175 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0429 19:35:08.425299   49175 command_runner.go:130] > Access: 2024-04-29 19:28:47.186812513 +0000
	I0429 19:35:08.425306   49175 command_runner.go:130] > Modify: 2024-04-29 19:28:47.186812513 +0000
	I0429 19:35:08.425314   49175 command_runner.go:130] > Change: 2024-04-29 19:28:47.186812513 +0000
	I0429 19:35:08.425322   49175 command_runner.go:130] >  Birth: 2024-04-29 19:28:47.186812513 +0000
	I0429 19:35:08.425440   49175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 19:35:08.432283   49175 command_runner.go:130] > Certificate will not expire
	I0429 19:35:08.432361   49175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 19:35:08.438794   49175 command_runner.go:130] > Certificate will not expire
	I0429 19:35:08.438855   49175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 19:35:08.444991   49175 command_runner.go:130] > Certificate will not expire
	I0429 19:35:08.445052   49175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 19:35:08.451116   49175 command_runner.go:130] > Certificate will not expire
	I0429 19:35:08.451165   49175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 19:35:08.457133   49175 command_runner.go:130] > Certificate will not expire
	I0429 19:35:08.457197   49175 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 19:35:08.463347   49175 command_runner.go:130] > Certificate will not expire
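The six checks above run openssl x509 -noout -checkend 86400, which asks whether the certificate will still be valid 24 hours from now. A minimal Go sketch of the same test using the standard crypto/x509 package follows; willExpireWithin is a hypothetical helper name, not minikube's implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// willExpireWithin reports whether the first certificate in the PEM file
// expires inside the given window -- the same question the log answers with
// `openssl x509 -noout -checkend 86400`.
func willExpireWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := willExpireWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if expiring {
		fmt.Println("Certificate will expire within 24h")
	} else {
		fmt.Println("Certificate will not expire")
	}
}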
	I0429 19:35:08.463418   49175 kubeadm.go:391] StartCluster: {Name:multinode-773806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-773806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.211 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.107 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:35:08.463553   49175 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 19:35:08.463624   49175 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 19:35:08.508113   49175 command_runner.go:130] > e1a626f59ab5873b1c7e06e8347139a4f3f9851df447bfeab7fb730a33cb663e
	I0429 19:35:08.508139   49175 command_runner.go:130] > 46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18
	I0429 19:35:08.508145   49175 command_runner.go:130] > 19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01
	I0429 19:35:08.508152   49175 command_runner.go:130] > 305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35
	I0429 19:35:08.508157   49175 command_runner.go:130] > e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7
	I0429 19:35:08.508163   49175 command_runner.go:130] > 6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e
	I0429 19:35:08.508172   49175 command_runner.go:130] > 28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe
	I0429 19:35:08.508184   49175 command_runner.go:130] > bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d
	I0429 19:35:08.508206   49175 cri.go:89] found id: "e1a626f59ab5873b1c7e06e8347139a4f3f9851df447bfeab7fb730a33cb663e"
	I0429 19:35:08.508221   49175 cri.go:89] found id: "46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18"
	I0429 19:35:08.508225   49175 cri.go:89] found id: "19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01"
	I0429 19:35:08.508227   49175 cri.go:89] found id: "305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35"
	I0429 19:35:08.508230   49175 cri.go:89] found id: "e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7"
	I0429 19:35:08.508233   49175 cri.go:89] found id: "6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e"
	I0429 19:35:08.508236   49175 cri.go:89] found id: "28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe"
	I0429 19:35:08.508238   49175 cri.go:89] found id: "bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d"
	I0429 19:35:08.508240   49175 cri.go:89] found id: ""
	I0429 19:35:08.508289   49175 ssh_runner.go:195] Run: sudo runc list -f json
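The container listing above comes from crictl: the command crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system is run over SSH and each output line is treated as a container ID ("found id: ..."). A minimal Go sketch of that step, run locally rather than through ssh_runner; kubeSystemContainerIDs is a hypothetical helper, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs reproduces the listing step from the log: ask the
// CRI runtime (via crictl) for every container, running or exited, whose pod
// lives in the kube-system namespace, and return the bare IDs.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per output line
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}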
	
	
	==> CRI-O <==
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.464829975Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714419543464792653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afb16048-fcb4-4be2-8dcc-819a540c9b28 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.466505375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa0c027d-8a85-45c0-b76a-8a50d239c9e2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.466614764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa0c027d-8a85-45c0-b76a-8a50d239c9e2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.467231605Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2fb326dbcef57d3bbe95233b16e022fd5fd3bae33ebe5c87a0f51055bc8ba80,PodSandboxId:a1a2e94cb6ac094ec3b9afe7a6c834b99be78ab0c64491ac723c2f3348dbf2ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714419349345422670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44b8ef5992602486837e2ea2c56864636442ed442c246e5a5b9bb93be932e23,PodSandboxId:75563ac3377fd24238989285dcc59268e3e68a7f3ac2bf979f9aa274e632cb71,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714419315781447172,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9942452293a11f80f22b277a2fcee01abf0e38a51bb3f6b45ddf1dc524b557c,PodSandboxId:db8694fe181b12d57d9f8ad1388d2877a27870b9d79d25be37cb341800d19d64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714419315841918559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23b20c1a1888c715e25c28dfd27a4f61f8d433f9e836b9c39c6ca7f3ca0e7e8,PodSandboxId:e08d32d1c554ab6ee30b17103ecab11ce8b4285dfb14df434c78f7cf90ab90af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714419315681113024,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13-512c5d336ba7,},Annotations:map[string]
string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e6a58f579243e6cb3e6f6861dd1bf66e9ee1f4ded82d6a10d8f7cd75afd355,PodSandboxId:4827f71827df8e22f2250ea6970f6a61ce0670ad91924c0f52353449cfb3e929,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714419315583586336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd171b7365ef28c752b6dbfa8eeb2824617f2c787b80af5ed48d968ff20b759d,PodSandboxId:8174c871a80838577b4f378024621f1af603736df3ca9b693241b14941cce240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714419310832413911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string{io.kubernetes.container.hash: a99f5bf3,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158117fc5586ddd5f255b607d0890364bb2620e5f780e3a30ca08d378dd8fe43,PodSandboxId:0e348a729fef589e316cd04ed9245bbd2519fb2105fbcfd5ed2b2313bcbaeb26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714419310758349616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c33
51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f524cad554a80a5d6a27ba6563ea8c8f621a795a1c50623338c8fe8a4115da,PodSandboxId:9fdd8a3bf7b4dff2043f01be84ceb0a9d0ade12d113d067ba3dbfba615de478b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714419310804105150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:312d2cc38cb7921577370967c3e1f1355c1f3e19a6e1ebea1e5999e69c8051c0,PodSandboxId:5808bb5d0b52c2b6dcd28fa3fa0dc470cbb95cd8b346386727d82a0301a6cf36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714419310709972037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash: aa7fe539,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc0ee6bf1c03cbcbd4ea4e5e6c9c2987263bd71212a7b23368d9db518e3ee6c,PodSandboxId:17c1759c31d692f9a1470aaeddd37ee4d782a38b9a37d65fe7d268921c5f9769,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714419004298773183,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a626f59ab5873b1c7e06e8347139a4f3f9851df447bfeab7fb730a33cb663e,PodSandboxId:49b427cb0ae262db48c72ae12d892b4ce23714e79d39be3d0f35b13099ea33c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714418953469366757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.kubernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18,PodSandboxId:c358abeb705fe27b6a791b10ec94d1e5828461489d28558b394000231adb4b11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714418952426483579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01,PodSandboxId:7351f900961919b09ee26ab9d5462cb8c1299c10ed067fc93a0598d12586b2b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714418951015992455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35,PodSandboxId:8df979e0df5a6155c590f8fc519306e7a0e281480e2c8436ede54e4efe5bb98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714418950702509128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13
-512c5d336ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e,PodSandboxId:5459600487f294a104c1c7cb36f5789086d522e13fb1ac3a8f05a968d807cef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714418930908278427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string
{io.kubernetes.container.hash: a99f5bf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7,PodSandboxId:54315db19ed4f14de6fecfa2d7ad4da6365acd618a5e499021386541c4ffc12f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714418930914531932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.
container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe,PodSandboxId:423ec7fceda9b25192a04cb7f9665345a665bc725ed13d676cbd75238fdd5c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714418930824968380,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash:
aa7fe539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d,PodSandboxId:ca30f74c7f5dd7894b5c7a3709754dc478c207446f3e2aeade363d17f1f4f653,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714418930818106797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa0c027d-8a85-45c0-b76a-8a50d239c9e2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.525254163Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8bce6264-d1ae-405a-9012-9e61785ddbd5 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.525400823Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8bce6264-d1ae-405a-9012-9e61785ddbd5 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.527729076Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47e53dad-7409-417d-b934-0117edfc8804 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.528766201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714419543528729223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47e53dad-7409-417d-b934-0117edfc8804 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.529598421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a72b6480-64e2-48b5-8de0-f6a3e6e0b88a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.529731069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a72b6480-64e2-48b5-8de0-f6a3e6e0b88a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.530107387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2fb326dbcef57d3bbe95233b16e022fd5fd3bae33ebe5c87a0f51055bc8ba80,PodSandboxId:a1a2e94cb6ac094ec3b9afe7a6c834b99be78ab0c64491ac723c2f3348dbf2ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714419349345422670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44b8ef5992602486837e2ea2c56864636442ed442c246e5a5b9bb93be932e23,PodSandboxId:75563ac3377fd24238989285dcc59268e3e68a7f3ac2bf979f9aa274e632cb71,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714419315781447172,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9942452293a11f80f22b277a2fcee01abf0e38a51bb3f6b45ddf1dc524b557c,PodSandboxId:db8694fe181b12d57d9f8ad1388d2877a27870b9d79d25be37cb341800d19d64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714419315841918559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23b20c1a1888c715e25c28dfd27a4f61f8d433f9e836b9c39c6ca7f3ca0e7e8,PodSandboxId:e08d32d1c554ab6ee30b17103ecab11ce8b4285dfb14df434c78f7cf90ab90af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714419315681113024,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13-512c5d336ba7,},Annotations:map[string]
string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e6a58f579243e6cb3e6f6861dd1bf66e9ee1f4ded82d6a10d8f7cd75afd355,PodSandboxId:4827f71827df8e22f2250ea6970f6a61ce0670ad91924c0f52353449cfb3e929,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714419315583586336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd171b7365ef28c752b6dbfa8eeb2824617f2c787b80af5ed48d968ff20b759d,PodSandboxId:8174c871a80838577b4f378024621f1af603736df3ca9b693241b14941cce240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714419310832413911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string{io.kubernetes.container.hash: a99f5bf3,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158117fc5586ddd5f255b607d0890364bb2620e5f780e3a30ca08d378dd8fe43,PodSandboxId:0e348a729fef589e316cd04ed9245bbd2519fb2105fbcfd5ed2b2313bcbaeb26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714419310758349616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c33
51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f524cad554a80a5d6a27ba6563ea8c8f621a795a1c50623338c8fe8a4115da,PodSandboxId:9fdd8a3bf7b4dff2043f01be84ceb0a9d0ade12d113d067ba3dbfba615de478b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714419310804105150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:312d2cc38cb7921577370967c3e1f1355c1f3e19a6e1ebea1e5999e69c8051c0,PodSandboxId:5808bb5d0b52c2b6dcd28fa3fa0dc470cbb95cd8b346386727d82a0301a6cf36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714419310709972037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash: aa7fe539,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc0ee6bf1c03cbcbd4ea4e5e6c9c2987263bd71212a7b23368d9db518e3ee6c,PodSandboxId:17c1759c31d692f9a1470aaeddd37ee4d782a38b9a37d65fe7d268921c5f9769,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714419004298773183,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a626f59ab5873b1c7e06e8347139a4f3f9851df447bfeab7fb730a33cb663e,PodSandboxId:49b427cb0ae262db48c72ae12d892b4ce23714e79d39be3d0f35b13099ea33c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714418953469366757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.kubernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18,PodSandboxId:c358abeb705fe27b6a791b10ec94d1e5828461489d28558b394000231adb4b11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714418952426483579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01,PodSandboxId:7351f900961919b09ee26ab9d5462cb8c1299c10ed067fc93a0598d12586b2b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714418951015992455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35,PodSandboxId:8df979e0df5a6155c590f8fc519306e7a0e281480e2c8436ede54e4efe5bb98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714418950702509128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13
-512c5d336ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e,PodSandboxId:5459600487f294a104c1c7cb36f5789086d522e13fb1ac3a8f05a968d807cef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714418930908278427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string
{io.kubernetes.container.hash: a99f5bf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7,PodSandboxId:54315db19ed4f14de6fecfa2d7ad4da6365acd618a5e499021386541c4ffc12f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714418930914531932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.
container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe,PodSandboxId:423ec7fceda9b25192a04cb7f9665345a665bc725ed13d676cbd75238fdd5c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714418930824968380,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash:
aa7fe539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d,PodSandboxId:ca30f74c7f5dd7894b5c7a3709754dc478c207446f3e2aeade363d17f1f4f653,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714418930818106797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a72b6480-64e2-48b5-8de0-f6a3e6e0b88a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.583892359Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9908107-2ff7-4d6f-812d-15a6e496fd45 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.583993498Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9908107-2ff7-4d6f-812d-15a6e496fd45 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.585903243Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=881909f8-ff58-42af-a134-77638f5c66ea name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.586549435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714419543586522887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=881909f8-ff58-42af-a134-77638f5c66ea name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.587103317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02b62b08-59dd-473f-aab7-d6e642bdb416 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.587222905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02b62b08-59dd-473f-aab7-d6e642bdb416 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.588537054Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2fb326dbcef57d3bbe95233b16e022fd5fd3bae33ebe5c87a0f51055bc8ba80,PodSandboxId:a1a2e94cb6ac094ec3b9afe7a6c834b99be78ab0c64491ac723c2f3348dbf2ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714419349345422670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44b8ef5992602486837e2ea2c56864636442ed442c246e5a5b9bb93be932e23,PodSandboxId:75563ac3377fd24238989285dcc59268e3e68a7f3ac2bf979f9aa274e632cb71,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714419315781447172,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9942452293a11f80f22b277a2fcee01abf0e38a51bb3f6b45ddf1dc524b557c,PodSandboxId:db8694fe181b12d57d9f8ad1388d2877a27870b9d79d25be37cb341800d19d64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714419315841918559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23b20c1a1888c715e25c28dfd27a4f61f8d433f9e836b9c39c6ca7f3ca0e7e8,PodSandboxId:e08d32d1c554ab6ee30b17103ecab11ce8b4285dfb14df434c78f7cf90ab90af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714419315681113024,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13-512c5d336ba7,},Annotations:map[string]
string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e6a58f579243e6cb3e6f6861dd1bf66e9ee1f4ded82d6a10d8f7cd75afd355,PodSandboxId:4827f71827df8e22f2250ea6970f6a61ce0670ad91924c0f52353449cfb3e929,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714419315583586336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd171b7365ef28c752b6dbfa8eeb2824617f2c787b80af5ed48d968ff20b759d,PodSandboxId:8174c871a80838577b4f378024621f1af603736df3ca9b693241b14941cce240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714419310832413911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string{io.kubernetes.container.hash: a99f5bf3,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158117fc5586ddd5f255b607d0890364bb2620e5f780e3a30ca08d378dd8fe43,PodSandboxId:0e348a729fef589e316cd04ed9245bbd2519fb2105fbcfd5ed2b2313bcbaeb26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714419310758349616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c33
51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f524cad554a80a5d6a27ba6563ea8c8f621a795a1c50623338c8fe8a4115da,PodSandboxId:9fdd8a3bf7b4dff2043f01be84ceb0a9d0ade12d113d067ba3dbfba615de478b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714419310804105150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:312d2cc38cb7921577370967c3e1f1355c1f3e19a6e1ebea1e5999e69c8051c0,PodSandboxId:5808bb5d0b52c2b6dcd28fa3fa0dc470cbb95cd8b346386727d82a0301a6cf36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714419310709972037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash: aa7fe539,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc0ee6bf1c03cbcbd4ea4e5e6c9c2987263bd71212a7b23368d9db518e3ee6c,PodSandboxId:17c1759c31d692f9a1470aaeddd37ee4d782a38b9a37d65fe7d268921c5f9769,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714419004298773183,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a626f59ab5873b1c7e06e8347139a4f3f9851df447bfeab7fb730a33cb663e,PodSandboxId:49b427cb0ae262db48c72ae12d892b4ce23714e79d39be3d0f35b13099ea33c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714418953469366757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.kubernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18,PodSandboxId:c358abeb705fe27b6a791b10ec94d1e5828461489d28558b394000231adb4b11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714418952426483579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01,PodSandboxId:7351f900961919b09ee26ab9d5462cb8c1299c10ed067fc93a0598d12586b2b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714418951015992455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35,PodSandboxId:8df979e0df5a6155c590f8fc519306e7a0e281480e2c8436ede54e4efe5bb98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714418950702509128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13
-512c5d336ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e,PodSandboxId:5459600487f294a104c1c7cb36f5789086d522e13fb1ac3a8f05a968d807cef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714418930908278427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string
{io.kubernetes.container.hash: a99f5bf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7,PodSandboxId:54315db19ed4f14de6fecfa2d7ad4da6365acd618a5e499021386541c4ffc12f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714418930914531932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.
container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe,PodSandboxId:423ec7fceda9b25192a04cb7f9665345a665bc725ed13d676cbd75238fdd5c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714418930824968380,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash:
aa7fe539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d,PodSandboxId:ca30f74c7f5dd7894b5c7a3709754dc478c207446f3e2aeade363d17f1f4f653,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714418930818106797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02b62b08-59dd-473f-aab7-d6e642bdb416 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.640452488Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=131ce0c4-fd5d-4744-9b2a-f59c3af54bd4 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.640530957Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=131ce0c4-fd5d-4744-9b2a-f59c3af54bd4 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.645861969Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc2fe8f8-1bc1-469e-8ebb-5a3a3621081a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.646372072Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714419543646344053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc2fe8f8-1bc1-469e-8ebb-5a3a3621081a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.647330796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=343bc440-16dc-4976-a0aa-22639d227b7c name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.647393554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=343bc440-16dc-4976-a0aa-22639d227b7c name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:39:03 multinode-773806 crio[2847]: time="2024-04-29 19:39:03.647778744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a2fb326dbcef57d3bbe95233b16e022fd5fd3bae33ebe5c87a0f51055bc8ba80,PodSandboxId:a1a2e94cb6ac094ec3b9afe7a6c834b99be78ab0c64491ac723c2f3348dbf2ae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1714419349345422670,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44b8ef5992602486837e2ea2c56864636442ed442c246e5a5b9bb93be932e23,PodSandboxId:75563ac3377fd24238989285dcc59268e3e68a7f3ac2bf979f9aa274e632cb71,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1714419315781447172,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9942452293a11f80f22b277a2fcee01abf0e38a51bb3f6b45ddf1dc524b557c,PodSandboxId:db8694fe181b12d57d9f8ad1388d2877a27870b9d79d25be37cb341800d19d64,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714419315841918559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a23b20c1a1888c715e25c28dfd27a4f61f8d433f9e836b9c39c6ca7f3ca0e7e8,PodSandboxId:e08d32d1c554ab6ee30b17103ecab11ce8b4285dfb14df434c78f7cf90ab90af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714419315681113024,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13-512c5d336ba7,},Annotations:map[string]
string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e6a58f579243e6cb3e6f6861dd1bf66e9ee1f4ded82d6a10d8f7cd75afd355,PodSandboxId:4827f71827df8e22f2250ea6970f6a61ce0670ad91924c0f52353449cfb3e929,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714419315583586336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.ku
bernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd171b7365ef28c752b6dbfa8eeb2824617f2c787b80af5ed48d968ff20b759d,PodSandboxId:8174c871a80838577b4f378024621f1af603736df3ca9b693241b14941cce240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714419310832413911,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string{io.kubernetes.container.hash: a99f5bf3,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158117fc5586ddd5f255b607d0890364bb2620e5f780e3a30ca08d378dd8fe43,PodSandboxId:0e348a729fef589e316cd04ed9245bbd2519fb2105fbcfd5ed2b2313bcbaeb26,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714419310758349616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.container.hash: 933c33
51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27f524cad554a80a5d6a27ba6563ea8c8f621a795a1c50623338c8fe8a4115da,PodSandboxId:9fdd8a3bf7b4dff2043f01be84ceb0a9d0ade12d113d067ba3dbfba615de478b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714419310804105150,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:312d2cc38cb7921577370967c3e1f1355c1f3e19a6e1ebea1e5999e69c8051c0,PodSandboxId:5808bb5d0b52c2b6dcd28fa3fa0dc470cbb95cd8b346386727d82a0301a6cf36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714419310709972037,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash: aa7fe539,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bc0ee6bf1c03cbcbd4ea4e5e6c9c2987263bd71212a7b23368d9db518e3ee6c,PodSandboxId:17c1759c31d692f9a1470aaeddd37ee4d782a38b9a37d65fe7d268921c5f9769,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1714419004298773183,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-b9pvl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4e08525-845b-423c-8481-20addac1f5e7,},Annotations:map[string]string{io.kubernetes.container.hash: cfdaf4d5,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1a626f59ab5873b1c7e06e8347139a4f3f9851df447bfeab7fb730a33cb663e,PodSandboxId:49b427cb0ae262db48c72ae12d892b4ce23714e79d39be3d0f35b13099ea33c7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714418953469366757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a28cf547-261c-4662-bd9c-4966ca3cdfd1,},Annotations:map[string]string{io.kubernetes.container.hash: 723b21f0,io.kubernetes.container.restartCount: 0,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18,PodSandboxId:c358abeb705fe27b6a791b10ec94d1e5828461489d28558b394000231adb4b11,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714418952426483579,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vdv7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 916bfb3a-8ecd-470b-9ae4-615beffd9990,},Annotations:map[string]string{io.kubernetes.container.hash: 14ea886c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\
"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01,PodSandboxId:7351f900961919b09ee26ab9d5462cb8c1299c10ed067fc93a0598d12586b2b0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1714418951015992455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vdl58,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6f195859-a11d-4707-b0e8-92b7164c397d,},Annotations:map[string]string{io.kubernetes.container.hash: d1696e59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35,PodSandboxId:8df979e0df5a6155c590f8fc519306e7a0e281480e2c8436ede54e4efe5bb98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714418950702509128,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vfsvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca6e7675-8035-4977-9d13
-512c5d336ba7,},Annotations:map[string]string{io.kubernetes.container.hash: 659885aa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e,PodSandboxId:5459600487f294a104c1c7cb36f5789086d522e13fb1ac3a8f05a968d807cef4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714418930908278427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ec2119e0b44dfd6dc5b4e8438afbf52,},Annotations:map[string]string
{io.kubernetes.container.hash: a99f5bf3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7,PodSandboxId:54315db19ed4f14de6fecfa2d7ad4da6365acd618a5e499021386541c4ffc12f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714418930914531932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9751f17d8a6ed92a2217781111ae40ab,},Annotations:map[string]string{io.kubernetes.
container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe,PodSandboxId:423ec7fceda9b25192a04cb7f9665345a665bc725ed13d676cbd75238fdd5c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714418930824968380,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa29ceace505678157206b79402fef09,},Annotations:map[string]string{io.kubernetes.container.hash:
aa7fe539,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d,PodSandboxId:ca30f74c7f5dd7894b5c7a3709754dc478c207446f3e2aeade363d17f1f4f653,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714418930818106797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-773806,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a75c0b69ef7d351115644532878043fc,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=343bc440-16dc-4976-a0aa-22639d227b7c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a2fb326dbcef5       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   a1a2e94cb6ac0       busybox-fc5497c4f-b9pvl
	d9942452293a1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   db8694fe181b1       coredns-7db6d8ff4d-vdv7z
	f44b8ef599260       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   75563ac3377fd       kindnet-vdl58
	a23b20c1a1888       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago       Running             kube-proxy                1                   e08d32d1c554a       kube-proxy-vfsvr
	33e6a58f57924       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   4827f71827df8       storage-provisioner
	dd171b7365ef2       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   8174c871a8083       etcd-multinode-773806
	27f524cad554a       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago       Running             kube-scheduler            1                   9fdd8a3bf7b4d       kube-scheduler-multinode-773806
	158117fc5586d       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago       Running             kube-controller-manager   1                   0e348a729fef5       kube-controller-manager-multinode-773806
	312d2cc38cb79       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago       Running             kube-apiserver            1                   5808bb5d0b52c       kube-apiserver-multinode-773806
	6bc0ee6bf1c03       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   17c1759c31d69       busybox-fc5497c4f-b9pvl
	e1a626f59ab58       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   49b427cb0ae26       storage-provisioner
	46ad3d852252a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   c358abeb705fe       coredns-7db6d8ff4d-vdv7z
	19c5032fd428a       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      9 minutes ago       Exited              kindnet-cni               0                   7351f90096191       kindnet-vdl58
	305781b9713c9       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      9 minutes ago       Exited              kube-proxy                0                   8df979e0df5a6       kube-proxy-vfsvr
	e81cb921a76b2       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      10 minutes ago      Exited              kube-scheduler            0                   54315db19ed4f       kube-scheduler-multinode-773806
	6fb17aa0e298d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   5459600487f29       etcd-multinode-773806
	28805d1b207fa       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      10 minutes ago      Exited              kube-apiserver            0                   423ec7fceda9b       kube-apiserver-multinode-773806
	bbd23693658e9       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      10 minutes ago      Exited              kube-controller-manager   0                   ca30f74c7f5dd       kube-controller-manager-multinode-773806
	
	
	==> coredns [46ad3d852252a4ce94367ce664fdc628fd1b5c544112321dd690d95ef57a0a18] <==
	[INFO] 10.244.1.2:59402 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002177369s
	[INFO] 10.244.1.2:53557 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165989s
	[INFO] 10.244.1.2:48817 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096944s
	[INFO] 10.244.1.2:46437 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001598607s
	[INFO] 10.244.1.2:37562 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170251s
	[INFO] 10.244.1.2:49910 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104308s
	[INFO] 10.244.1.2:56068 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00019488s
	[INFO] 10.244.0.3:33773 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001788s
	[INFO] 10.244.0.3:50988 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015534s
	[INFO] 10.244.0.3:32923 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086513s
	[INFO] 10.244.0.3:35251 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121138s
	[INFO] 10.244.1.2:41674 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142798s
	[INFO] 10.244.1.2:52916 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177584s
	[INFO] 10.244.1.2:37672 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170818s
	[INFO] 10.244.1.2:36877 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091381s
	[INFO] 10.244.0.3:44049 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209014s
	[INFO] 10.244.0.3:57474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141919s
	[INFO] 10.244.0.3:45582 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000067412s
	[INFO] 10.244.0.3:56382 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000067851s
	[INFO] 10.244.1.2:33931 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00017992s
	[INFO] 10.244.1.2:33361 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00024964s
	[INFO] 10.244.1.2:48270 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107161s
	[INFO] 10.244.1.2:53778 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174088s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d9942452293a11f80f22b277a2fcee01abf0e38a51bb3f6b45ddf1dc524b557c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39128 - 857 "HINFO IN 2565273504250767231.420983194396387205. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009530349s
	
	
	==> describe nodes <==
	Name:               multinode-773806
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-773806
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-773806
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T19_28_57_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:28:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-773806
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:39:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:35:14 +0000   Mon, 29 Apr 2024 19:28:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:35:14 +0000   Mon, 29 Apr 2024 19:28:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:35:14 +0000   Mon, 29 Apr 2024 19:28:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:35:14 +0000   Mon, 29 Apr 2024 19:29:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    multinode-773806
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 881b1ba426f74211885cec1846e7f341
	  System UUID:                881b1ba4-26f7-4211-885c-ec1846e7f341
	  Boot ID:                    d39b36e4-9198-4524-be10-914010bd2df8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-b9pvl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	  kube-system                 coredns-7db6d8ff4d-vdv7z                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m54s
	  kube-system                 etcd-multinode-773806                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-vdl58                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m53s
	  kube-system                 kube-apiserver-multinode-773806             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-773806    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-vfsvr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 kube-scheduler-multinode-773806             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m52s                  kube-proxy       
	  Normal  Starting                 3m47s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-773806 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-773806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-773806 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m54s                  node-controller  Node multinode-773806 event: Registered Node multinode-773806 in Controller
	  Normal  NodeReady                9m52s                  kubelet          Node multinode-773806 status is now: NodeReady
	  Normal  Starting                 3m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m53s (x8 over 3m53s)  kubelet          Node multinode-773806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x8 over 3m53s)  kubelet          Node multinode-773806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s (x7 over 3m53s)  kubelet          Node multinode-773806 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m36s                  node-controller  Node multinode-773806 event: Registered Node multinode-773806 in Controller
	
	
	Name:               multinode-773806-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-773806-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=multinode-773806
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T19_35_57_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:35:56 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-773806-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:36:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Apr 2024 19:36:27 +0000   Mon, 29 Apr 2024 19:37:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Apr 2024 19:36:27 +0000   Mon, 29 Apr 2024 19:37:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Apr 2024 19:36:27 +0000   Mon, 29 Apr 2024 19:37:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Apr 2024 19:36:27 +0000   Mon, 29 Apr 2024 19:37:17 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    multinode-773806-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f9ab04a3503d4762af8accf5352b5723
	  System UUID:                f9ab04a3-503d-4762-af8a-ccf5352b5723
	  Boot ID:                    5c25a431-81ac-4f94-9519-b02907883f0a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qw8vg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kindnet-cjpsn              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m16s
	  kube-system                 kube-proxy-bmfbq           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m2s                   kube-proxy       
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m17s (x2 over 9m17s)  kubelet          Node multinode-773806-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s (x2 over 9m17s)  kubelet          Node multinode-773806-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s (x2 over 9m17s)  kubelet          Node multinode-773806-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m6s                   kubelet          Node multinode-773806-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m8s)    kubelet          Node multinode-773806-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m8s)    kubelet          Node multinode-773806-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m8s)    kubelet          Node multinode-773806-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m58s                  kubelet          Node multinode-773806-m02 status is now: NodeReady
	  Normal  NodeNotReady             107s                   node-controller  Node multinode-773806-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.059061] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057718] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.179301] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.128774] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.280448] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.836895] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +0.063822] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.019398] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +1.165726] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.413637] systemd-fstab-generator[1288]: Ignoring "noauto" option for root device
	[  +0.092118] kauditd_printk_skb: 30 callbacks suppressed
	[Apr29 19:29] systemd-fstab-generator[1486]: Ignoring "noauto" option for root device
	[  +0.117730] kauditd_printk_skb: 21 callbacks suppressed
	[Apr29 19:30] kauditd_printk_skb: 84 callbacks suppressed
	[Apr29 19:35] systemd-fstab-generator[2764]: Ignoring "noauto" option for root device
	[  +0.148396] systemd-fstab-generator[2776]: Ignoring "noauto" option for root device
	[  +0.189737] systemd-fstab-generator[2790]: Ignoring "noauto" option for root device
	[  +0.144477] systemd-fstab-generator[2802]: Ignoring "noauto" option for root device
	[  +0.328715] systemd-fstab-generator[2830]: Ignoring "noauto" option for root device
	[  +0.824564] systemd-fstab-generator[2931]: Ignoring "noauto" option for root device
	[  +1.886606] systemd-fstab-generator[3056]: Ignoring "noauto" option for root device
	[  +5.726576] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.907683] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.675354] systemd-fstab-generator[3870]: Ignoring "noauto" option for root device
	[ +18.232540] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [6fb17aa0e298de35a1fc8c094e938b719e6aa7e62cad857d734cdae1b0e6247e] <==
	{"level":"info","ts":"2024-04-29T19:29:49.954476Z","caller":"traceutil/trace.go:171","msg":"trace[543404885] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"338.538913ms","start":"2024-04-29T19:29:49.615923Z","end":"2024-04-29T19:29:49.954462Z","steps":["trace[543404885] 'process raft request'  (duration: 136.496496ms)","trace[543404885] 'compare'  (duration: 200.816838ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T19:29:49.954588Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T19:29:49.615907Z","time spent":"338.63447ms","remote":"127.0.0.1:44590","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3214,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-773806-m02\" mod_revision:485 > success:<request_put:<key:\"/registry/minions/multinode-773806-m02\" value_size:3168 >> failure:<request_range:<key:\"/registry/minions/multinode-773806-m02\" > >"}
	{"level":"warn","ts":"2024-04-29T19:29:49.95427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.411976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-04-29T19:29:49.954849Z","caller":"traceutil/trace.go:171","msg":"trace[1069363287] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:487; }","duration":"160.035916ms","start":"2024-04-29T19:29:49.794797Z","end":"2024-04-29T19:29:49.954833Z","steps":["trace[1069363287] 'agreement among raft nodes before linearized reading'  (duration: 159.345504ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T19:29:49.954911Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.402519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-773806-m02\" ","response":"range_response_count:1 size:3229"}
	{"level":"info","ts":"2024-04-29T19:29:49.954959Z","caller":"traceutil/trace.go:171","msg":"trace[1492677912] range","detail":"{range_begin:/registry/minions/multinode-773806-m02; range_end:; response_count:1; response_revision:487; }","duration":"115.472448ms","start":"2024-04-29T19:29:49.839477Z","end":"2024-04-29T19:29:49.95495Z","steps":["trace[1492677912] 'agreement among raft nodes before linearized reading'  (duration: 115.334457ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T19:29:50.065403Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.66464ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-cjpsn\" ","response":"range_response_count:1 size:4934"}
	{"level":"info","ts":"2024-04-29T19:29:50.065614Z","caller":"traceutil/trace.go:171","msg":"trace[792982758] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-cjpsn; range_end:; response_count:1; response_revision:489; }","duration":"104.891716ms","start":"2024-04-29T19:29:49.960704Z","end":"2024-04-29T19:29:50.065595Z","steps":["trace[792982758] 'agreement among raft nodes before linearized reading'  (duration: 104.631661ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T19:30:35.714549Z","caller":"traceutil/trace.go:171","msg":"trace[882706862] linearizableReadLoop","detail":"{readStateIndex:621; appliedIndex:619; }","duration":"168.484754ms","start":"2024-04-29T19:30:35.546031Z","end":"2024-04-29T19:30:35.714516Z","steps":["trace[882706862] 'read index received'  (duration: 161.637675ms)","trace[882706862] 'applied index is now lower than readState.Index'  (duration: 6.846415ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T19:30:35.714775Z","caller":"traceutil/trace.go:171","msg":"trace[1624359080] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"255.344807ms","start":"2024-04-29T19:30:35.459415Z","end":"2024-04-29T19:30:35.714759Z","steps":["trace[1624359080] 'process raft request'  (duration: 248.32785ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T19:30:35.717646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.172466ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-773806-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-04-29T19:30:35.717723Z","caller":"traceutil/trace.go:171","msg":"trace[1524988239] range","detail":"{range_begin:/registry/minions/multinode-773806-m03; range_end:; response_count:1; response_revision:588; }","duration":"145.282907ms","start":"2024-04-29T19:30:35.572432Z","end":"2024-04-29T19:30:35.717715Z","steps":["trace[1524988239] 'agreement among raft nodes before linearized reading'  (duration: 145.16778ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T19:30:35.714819Z","caller":"traceutil/trace.go:171","msg":"trace[4052478] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"172.261741ms","start":"2024-04-29T19:30:35.542553Z","end":"2024-04-29T19:30:35.714815Z","steps":["trace[4052478] 'process raft request'  (duration: 171.927263ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T19:30:35.715075Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.979329ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.127\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-04-29T19:30:35.717978Z","caller":"traceutil/trace.go:171","msg":"trace[534659608] range","detail":"{range_begin:/registry/masterleases/192.168.39.127; range_end:; response_count:1; response_revision:588; }","duration":"171.99351ms","start":"2024-04-29T19:30:35.545978Z","end":"2024-04-29T19:30:35.717971Z","steps":["trace[534659608] 'agreement among raft nodes before linearized reading'  (duration: 168.85937ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T19:33:34.889809Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-29T19:33:34.889941Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-773806","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"]}
	{"level":"warn","ts":"2024-04-29T19:33:34.890029Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T19:33:34.890232Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T19:33:34.930388Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.127:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-29T19:33:34.930493Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.127:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-29T19:33:34.930875Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9dc5e8b969e9632c","current-leader-member-id":"9dc5e8b969e9632c"}
	{"level":"info","ts":"2024-04-29T19:33:34.936589Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-04-29T19:33:34.936748Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-04-29T19:33:34.936825Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-773806","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"]}
	
	
	==> etcd [dd171b7365ef28c752b6dbfa8eeb2824617f2c787b80af5ed48d968ff20b759d] <==
	{"level":"info","ts":"2024-04-29T19:35:11.419212Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:35:11.41932Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:35:11.419593Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c switched to configuration voters=(11368748717410181932)"}
	{"level":"info","ts":"2024-04-29T19:35:11.419678Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","added-peer-id":"9dc5e8b969e9632c","added-peer-peer-urls":["https://192.168.39.127:2380"]}
	{"level":"info","ts":"2024-04-29T19:35:11.419843Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"367c7cb0db09c3ab","local-member-id":"9dc5e8b969e9632c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:35:11.421276Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:35:11.434814Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T19:35:11.441423Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-04-29T19:35:11.443207Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.127:2380"}
	{"level":"info","ts":"2024-04-29T19:35:11.451326Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9dc5e8b969e9632c","initial-advertise-peer-urls":["https://192.168.39.127:2380"],"listen-peer-urls":["https://192.168.39.127:2380"],"advertise-client-urls":["https://192.168.39.127:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.127:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T19:35:11.451514Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T19:35:13.182731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T19:35:13.182774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T19:35:13.182818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c received MsgPreVoteResp from 9dc5e8b969e9632c at term 2"}
	{"level":"info","ts":"2024-04-29T19:35:13.182833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T19:35:13.182848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c received MsgVoteResp from 9dc5e8b969e9632c at term 3"}
	{"level":"info","ts":"2024-04-29T19:35:13.182857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc5e8b969e9632c became leader at term 3"}
	{"level":"info","ts":"2024-04-29T19:35:13.182867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9dc5e8b969e9632c elected leader 9dc5e8b969e9632c at term 3"}
	{"level":"info","ts":"2024-04-29T19:35:13.191517Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9dc5e8b969e9632c","local-member-attributes":"{Name:multinode-773806 ClientURLs:[https://192.168.39.127:2379]}","request-path":"/0/members/9dc5e8b969e9632c/attributes","cluster-id":"367c7cb0db09c3ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T19:35:13.191528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:35:13.191905Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T19:35:13.191948Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T19:35:13.191983Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:35:13.193861Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T19:35:13.193861Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.127:2379"}
	
	
	==> kernel <==
	 19:39:04 up 10 min,  0 users,  load average: 0.21, 0.42, 0.27
	Linux multinode-773806 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [19c5032fd428a94505daf9a02c2f6dfa4e448612301afe5619bb5a7d22a72a01] <==
	I0429 19:32:52.056072       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.3.0/24] 
	I0429 19:33:02.071710       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:33:02.071806       1 main.go:227] handling current node
	I0429 19:33:02.071852       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:33:02.071878       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:33:02.072005       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0429 19:33:02.072026       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.3.0/24] 
	I0429 19:33:12.086137       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:33:12.086247       1 main.go:227] handling current node
	I0429 19:33:12.086264       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:33:12.086271       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:33:12.086384       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0429 19:33:12.086419       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.3.0/24] 
	I0429 19:33:22.098690       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:33:22.098737       1 main.go:227] handling current node
	I0429 19:33:22.098749       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:33:22.098756       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:33:22.098885       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0429 19:33:22.098916       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.3.0/24] 
	I0429 19:33:32.113885       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:33:32.113934       1 main.go:227] handling current node
	I0429 19:33:32.113945       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:33:32.113951       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:33:32.114056       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0429 19:33:32.114086       1 main.go:250] Node multinode-773806-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [f44b8ef5992602486837e2ea2c56864636442ed442c246e5a5b9bb93be932e23] <==
	I0429 19:37:56.878718       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:38:06.888101       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:38:06.888289       1 main.go:227] handling current node
	I0429 19:38:06.888331       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:38:06.888352       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:38:16.894658       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:38:16.894962       1 main.go:227] handling current node
	I0429 19:38:16.894997       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:38:16.895103       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:38:26.903372       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:38:26.903461       1 main.go:227] handling current node
	I0429 19:38:26.903483       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:38:26.903501       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:38:36.918872       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:38:36.919145       1 main.go:227] handling current node
	I0429 19:38:36.919281       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:38:36.919307       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:38:46.930955       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:38:46.931071       1 main.go:227] handling current node
	I0429 19:38:46.931107       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:38:46.931127       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	I0429 19:38:56.936383       1 main.go:223] Handling node with IPs: map[192.168.39.127:{}]
	I0429 19:38:56.936473       1 main.go:227] handling current node
	I0429 19:38:56.936496       1 main.go:223] Handling node with IPs: map[192.168.39.211:{}]
	I0429 19:38:56.936518       1 main.go:250] Node multinode-773806-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [28805d1b207faff267bcbc99e9e7489549b450d304c7dafe0b10e6929602dbbe] <==
	I0429 19:33:34.912436       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 19:33:34.912514       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 19:33:34.912550       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0429 19:33:34.912745       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0429 19:33:34.913331       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0429 19:33:34.913400       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0429 19:33:34.913499       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0429 19:33:34.913529       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	W0429 19:33:34.913602       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0429 19:33:34.914489       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0429 19:33:34.915862       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0429 19:33:34.915933       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0429 19:33:34.916426       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0429 19:33:34.917339       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	W0429 19:33:34.918392       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.918630       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.922725       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.923481       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.923768       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.923928       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.924701       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.924780       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.925133       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.915021       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0429 19:33:34.925549       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [312d2cc38cb7921577370967c3e1f1355c1f3e19a6e1ebea1e5999e69c8051c0] <==
	I0429 19:35:14.632430       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 19:35:14.634749       1 aggregator.go:165] initial CRD sync complete...
	I0429 19:35:14.634862       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 19:35:14.634973       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 19:35:14.655550       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 19:35:14.655599       1 policy_source.go:224] refreshing policies
	I0429 19:35:14.655813       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 19:35:14.671071       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 19:35:14.671337       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 19:35:14.677400       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 19:35:14.682786       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0429 19:35:14.682955       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 19:35:14.683577       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 19:35:14.684477       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0429 19:35:14.690624       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0429 19:35:14.698499       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 19:35:14.759410       1 cache.go:39] Caches are synced for autoregister controller
	I0429 19:35:15.491915       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 19:35:17.098536       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 19:35:17.253533       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 19:35:17.296344       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 19:35:17.380715       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 19:35:17.387838       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 19:35:27.352767       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 19:35:27.426080       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [158117fc5586ddd5f255b607d0890364bb2620e5f780e3a30ca08d378dd8fe43] <==
	I0429 19:35:56.864536       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-773806-m02" podCIDRs=["10.244.1.0/24"]
	I0429 19:35:57.899466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.873µs"
	I0429 19:35:58.742642       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.852µs"
	I0429 19:35:58.789030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.567µs"
	I0429 19:35:58.798769       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.03µs"
	I0429 19:35:58.813459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.372µs"
	I0429 19:35:58.826577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.648µs"
	I0429 19:35:58.832108       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.524µs"
	I0429 19:36:06.216064       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:36:06.242696       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="459.802µs"
	I0429 19:36:06.256687       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.462µs"
	I0429 19:36:09.225126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.069923ms"
	I0429 19:36:09.226462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.196µs"
	I0429 19:36:25.829616       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:36:26.980800       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:36:26.980816       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-773806-m03\" does not exist"
	I0429 19:36:26.999619       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-773806-m03" podCIDRs=["10.244.2.0/24"]
	I0429 19:36:36.574641       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:36:42.297258       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:37:17.491718       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.987553ms"
	I0429 19:37:17.492682       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="162.664µs"
	I0429 19:37:27.419073       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-rfl27"
	I0429 19:37:27.446847       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-rfl27"
	I0429 19:37:27.446928       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-p8psp"
	I0429 19:37:27.509068       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-p8psp"
	
	
	==> kube-controller-manager [bbd23693658e99e2d173c96fc024f00d96ee093071630cd01760e6f2af83d22d] <==
	I0429 19:29:48.079013       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-773806-m02\" does not exist"
	I0429 19:29:48.092457       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-773806-m02" podCIDRs=["10.244.1.0/24"]
	I0429 19:29:49.523600       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-773806-m02"
	I0429 19:29:58.486572       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:30:00.961583       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.641974ms"
	I0429 19:30:00.987973       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.220571ms"
	I0429 19:30:00.998375       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.329965ms"
	I0429 19:30:00.998513       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.812µs"
	I0429 19:30:04.826336       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.298573ms"
	I0429 19:30:04.826897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.798µs"
	I0429 19:30:05.026571       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.782267ms"
	I0429 19:30:05.026834       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="96.567µs"
	I0429 19:30:35.718088       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:30:35.719810       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-773806-m03\" does not exist"
	I0429 19:30:35.735009       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-773806-m03" podCIDRs=["10.244.2.0/24"]
	I0429 19:30:39.545973       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-773806-m03"
	I0429 19:30:45.816427       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:31:17.540512       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:31:18.616392       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:31:18.616572       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-773806-m03\" does not exist"
	I0429 19:31:18.654272       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-773806-m03" podCIDRs=["10.244.3.0/24"]
	I0429 19:31:28.256819       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:32:09.595354       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-773806-m02"
	I0429 19:32:14.696698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.635458ms"
	I0429 19:32:14.698456       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.418µs"
	
	
	==> kube-proxy [305781b9713c9451f0b5e6d409fed619b9db19166f5a866d809416862582eb35] <==
	I0429 19:29:11.037839       1 server_linux.go:69] "Using iptables proxy"
	I0429 19:29:11.062950       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.127"]
	I0429 19:29:11.155698       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:29:11.155726       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:29:11.155741       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:29:11.159917       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:29:11.160444       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:29:11.160636       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:29:11.161960       1 config.go:192] "Starting service config controller"
	I0429 19:29:11.162070       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:29:11.162823       1 config.go:319] "Starting node config controller"
	I0429 19:29:11.163058       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:29:11.163090       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:29:11.165705       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:29:11.263949       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:29:11.264006       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:29:11.265834       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [a23b20c1a1888c715e25c28dfd27a4f61f8d433f9e836b9c39c6ca7f3ca0e7e8] <==
	I0429 19:35:16.067080       1 server_linux.go:69] "Using iptables proxy"
	I0429 19:35:16.088253       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.127"]
	I0429 19:35:16.228297       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:35:16.228359       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:35:16.228377       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:35:16.234532       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:35:16.234730       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:35:16.234775       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:35:16.236289       1 config.go:192] "Starting service config controller"
	I0429 19:35:16.236373       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:35:16.236428       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:35:16.236433       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:35:16.236838       1 config.go:319] "Starting node config controller"
	I0429 19:35:16.237112       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:35:16.336970       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:35:16.337105       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:35:16.337370       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [27f524cad554a80a5d6a27ba6563ea8c8f621a795a1c50623338c8fe8a4115da] <==
	I0429 19:35:14.604701       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 19:35:14.604864       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:35:14.613727       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 19:35:14.616316       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 19:35:14.617218       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 19:35:14.617281       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0429 19:35:14.633501       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 19:35:14.633564       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 19:35:14.633677       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:35:14.633715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:35:14.633758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 19:35:14.633766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 19:35:14.633821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 19:35:14.633857       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 19:35:14.636481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:35:14.636528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:35:14.636584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 19:35:14.636622       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 19:35:14.636671       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 19:35:14.636681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 19:35:14.636717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 19:35:14.636754       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 19:35:14.636786       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 19:35:14.636821       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0429 19:35:14.717242       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e81cb921a76b29849629ccbc48f25fb112e8d9afbb11ff2ba64c72ef9b92f2e7] <==
	E0429 19:28:53.846992       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 19:28:53.847834       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:28:53.847873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 19:28:53.847884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:28:53.848047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 19:28:54.671030       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 19:28:54.671095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 19:28:54.725886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 19:28:54.725954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 19:28:54.782936       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 19:28:54.783067       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 19:28:54.790565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:28:54.790658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:28:54.879863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 19:28:54.880068       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 19:28:54.901050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:28:54.901141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:28:55.127613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 19:28:55.127867       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 19:28:55.150265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 19:28:55.150439       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 19:28:55.177683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 19:28:55.179448       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0429 19:28:57.638542       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0429 19:33:34.882425       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 29 19:35:14 multinode-773806 kubelet[3064]: I0429 19:35:14.993573    3064 topology_manager.go:215] "Topology Admit Handler" podUID="ca6e7675-8035-4977-9d13-512c5d336ba7" podNamespace="kube-system" podName="kube-proxy-vfsvr"
	Apr 29 19:35:14 multinode-773806 kubelet[3064]: I0429 19:35:14.993654    3064 topology_manager.go:215] "Topology Admit Handler" podUID="a28cf547-261c-4662-bd9c-4966ca3cdfd1" podNamespace="kube-system" podName="storage-provisioner"
	Apr 29 19:35:14 multinode-773806 kubelet[3064]: I0429 19:35:14.993722    3064 topology_manager.go:215] "Topology Admit Handler" podUID="c4e08525-845b-423c-8481-20addac1f5e7" podNamespace="default" podName="busybox-fc5497c4f-b9pvl"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.006923    3064 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.086482    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca6e7675-8035-4977-9d13-512c5d336ba7-xtables-lock\") pod \"kube-proxy-vfsvr\" (UID: \"ca6e7675-8035-4977-9d13-512c5d336ba7\") " pod="kube-system/kube-proxy-vfsvr"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.086612    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a28cf547-261c-4662-bd9c-4966ca3cdfd1-tmp\") pod \"storage-provisioner\" (UID: \"a28cf547-261c-4662-bd9c-4966ca3cdfd1\") " pod="kube-system/storage-provisioner"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.086678    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6f195859-a11d-4707-b0e8-92b7164c397d-cni-cfg\") pod \"kindnet-vdl58\" (UID: \"6f195859-a11d-4707-b0e8-92b7164c397d\") " pod="kube-system/kindnet-vdl58"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.086755    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f195859-a11d-4707-b0e8-92b7164c397d-xtables-lock\") pod \"kindnet-vdl58\" (UID: \"6f195859-a11d-4707-b0e8-92b7164c397d\") " pod="kube-system/kindnet-vdl58"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.086839    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f195859-a11d-4707-b0e8-92b7164c397d-lib-modules\") pod \"kindnet-vdl58\" (UID: \"6f195859-a11d-4707-b0e8-92b7164c397d\") " pod="kube-system/kindnet-vdl58"
	Apr 29 19:35:15 multinode-773806 kubelet[3064]: I0429 19:35:15.086973    3064 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca6e7675-8035-4977-9d13-512c5d336ba7-lib-modules\") pod \"kube-proxy-vfsvr\" (UID: \"ca6e7675-8035-4977-9d13-512c5d336ba7\") " pod="kube-system/kube-proxy-vfsvr"
	Apr 29 19:36:10 multinode-773806 kubelet[3064]: E0429 19:36:10.116518    3064 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:36:10 multinode-773806 kubelet[3064]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:36:10 multinode-773806 kubelet[3064]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:36:10 multinode-773806 kubelet[3064]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:36:10 multinode-773806 kubelet[3064]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:37:10 multinode-773806 kubelet[3064]: E0429 19:37:10.110684    3064 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:37:10 multinode-773806 kubelet[3064]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:37:10 multinode-773806 kubelet[3064]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:37:10 multinode-773806 kubelet[3064]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:37:10 multinode-773806 kubelet[3064]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:38:10 multinode-773806 kubelet[3064]: E0429 19:38:10.119375    3064 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:38:10 multinode-773806 kubelet[3064]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:38:10 multinode-773806 kubelet[3064]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:38:10 multinode-773806 kubelet[3064]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:38:10 multinode-773806 kubelet[3064]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 19:39:03.144896   51064 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18774-7754/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
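The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.Scanner refusing a line longer than its default 64 KiB token limit (bufio.MaxScanTokenSize) while reading lastStart.txt, which is why the post-mortem step could not replay the last start log. For reference only, this is generic Go and not minikube's actual logs.go code: a reader can raise that limit with Scanner.Buffer before scanning.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path; the file in the log above lives under the
		// minikube test home.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default per-token ceiling is bufio.MaxScanTokenSize (64 KiB);
		// raise it to 1 MiB so one oversized line no longer aborts the scan.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}

The other common workaround is bufio.Reader.ReadString('\n'), which has no fixed per-line ceiling.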
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-773806 -n multinode-773806
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-773806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.53s)
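The kubelet journal in the stdout block above also ends with repeated "Could not set up iptables canary" entries: ip6tables cannot initialize the nat table, which usually means the ip6table_nat kernel module is not loaded in the guest. The log alone does not show whether that contributes to this failure; the sketch below is only a generic way to confirm the diagnosis from inside the node (for example via minikube ssh), using standard Linux tooling rather than any minikube or kubelet code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Check whether the ip6tables nat table is usable at all.
		if out, err := exec.Command("ip6tables", "-t", "nat", "-L").CombinedOutput(); err != nil {
			fmt.Printf("nat table unavailable: %v\n%s", err, out)
			// Usual remedy when the kernel ships ip6table_nat as a module;
			// needs root inside the guest.
			if out, err := exec.Command("modprobe", "ip6table_nat").CombinedOutput(); err != nil {
				fmt.Printf("modprobe ip6table_nat failed: %v\n%s", err, out)
				return
			}
			fmt.Println("ip6table_nat loaded; re-run the check")
			return
		}
		fmt.Println("ip6tables nat table is available")
	}

If the module loads, the kubelet's periodic canary check should stop logging this error on its next pass.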

                                                
                                    
x
+
TestPreload (267.66s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-031254 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0429 19:43:43.951570   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 19:44:00.893836   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-031254 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m3.260069564s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-031254 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-031254 image pull gcr.io/k8s-minikube/busybox: (2.985839892s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-031254
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-031254: (7.317258848s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-031254 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-031254 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m10.839032588s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-031254 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-04-29 19:47:18.476658578 +0000 UTC m=+4088.124033700
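The image list above holds only the images restored from the v1.24.4 preload tarball (under both their registry.k8s.io and k8s.gcr.io names) plus the storage provisioner and kindnet; gcr.io/k8s-minikube/busybox, pulled before the stop/start cycle, is missing, and that is the assertion that fails. A rough sketch of that style of check follows; it is not the actual preload_test.go code, and the binary path and profile name are simply taken from the commands logged above.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Binary path and profile name come from the run above; treat them as
		// placeholders for any other run.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-031254",
			"image", "list").CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "image list failed: %v\n%s", err, out)
			os.Exit(1)
		}
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Fprintf(os.Stderr, "expected gcr.io/k8s-minikube/busybox in image list, got:\n%s", out)
			os.Exit(1)
		}
		fmt.Println("busybox image survived the stop/start cycle")
	}

The empty result here is consistent with the restart re-extracting the preload tarball over the container image store, though the log by itself does not prove that mechanism.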
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-031254 -n test-preload-031254
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-031254 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-031254 logs -n 25: (1.198981222s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n multinode-773806 sudo cat                                       | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | /home/docker/cp-test_multinode-773806-m03_multinode-773806.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-773806 cp multinode-773806-m03:/home/docker/cp-test.txt                       | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m02:/home/docker/cp-test_multinode-773806-m03_multinode-773806-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n                                                                 | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | multinode-773806-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-773806 ssh -n multinode-773806-m02 sudo cat                                   | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	|         | /home/docker/cp-test_multinode-773806-m03_multinode-773806-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-773806 node stop m03                                                          | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:30 UTC |
	| node    | multinode-773806 node start                                                             | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:30 UTC | 29 Apr 24 19:31 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-773806                                                                | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:31 UTC |                     |
	| stop    | -p multinode-773806                                                                     | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:31 UTC |                     |
	| start   | -p multinode-773806                                                                     | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:33 UTC | 29 Apr 24 19:36 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-773806                                                                | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:36 UTC |                     |
	| node    | multinode-773806 node delete                                                            | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:36 UTC | 29 Apr 24 19:36 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-773806 stop                                                                   | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:36 UTC |                     |
	| start   | -p multinode-773806                                                                     | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:39 UTC | 29 Apr 24 19:42 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-773806                                                                | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:42 UTC |                     |
	| start   | -p multinode-773806-m02                                                                 | multinode-773806-m02 | jenkins | v1.33.0 | 29 Apr 24 19:42 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-773806-m03                                                                 | multinode-773806-m03 | jenkins | v1.33.0 | 29 Apr 24 19:42 UTC | 29 Apr 24 19:42 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-773806                                                                 | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:42 UTC |                     |
	| delete  | -p multinode-773806-m03                                                                 | multinode-773806-m03 | jenkins | v1.33.0 | 29 Apr 24 19:42 UTC | 29 Apr 24 19:42 UTC |
	| delete  | -p multinode-773806                                                                     | multinode-773806     | jenkins | v1.33.0 | 29 Apr 24 19:42 UTC | 29 Apr 24 19:42 UTC |
	| start   | -p test-preload-031254                                                                  | test-preload-031254  | jenkins | v1.33.0 | 29 Apr 24 19:42 UTC | 29 Apr 24 19:45 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-031254 image pull                                                          | test-preload-031254  | jenkins | v1.33.0 | 29 Apr 24 19:45 UTC | 29 Apr 24 19:46 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-031254                                                                  | test-preload-031254  | jenkins | v1.33.0 | 29 Apr 24 19:46 UTC | 29 Apr 24 19:46 UTC |
	| start   | -p test-preload-031254                                                                  | test-preload-031254  | jenkins | v1.33.0 | 29 Apr 24 19:46 UTC | 29 Apr 24 19:47 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-031254 image list                                                          | test-preload-031254  | jenkins | v1.33.0 | 29 Apr 24 19:47 UTC | 29 Apr 24 19:47 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 19:46:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 19:46:07.458592   53769 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:46:07.458699   53769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:46:07.458706   53769 out.go:304] Setting ErrFile to fd 2...
	I0429 19:46:07.458712   53769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:46:07.458928   53769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:46:07.459509   53769 out.go:298] Setting JSON to false
	I0429 19:46:07.460404   53769 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5265,"bootTime":1714414702,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 19:46:07.460466   53769 start.go:139] virtualization: kvm guest
	I0429 19:46:07.462899   53769 out.go:177] * [test-preload-031254] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 19:46:07.464360   53769 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:46:07.464360   53769 notify.go:220] Checking for updates...
	I0429 19:46:07.465802   53769 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:46:07.467224   53769 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:46:07.468548   53769 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:46:07.469748   53769 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 19:46:07.470874   53769 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:46:07.472336   53769 config.go:182] Loaded profile config "test-preload-031254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0429 19:46:07.472735   53769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:46:07.472773   53769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:46:07.487631   53769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36703
	I0429 19:46:07.488100   53769 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:46:07.488620   53769 main.go:141] libmachine: Using API Version  1
	I0429 19:46:07.488643   53769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:46:07.489017   53769 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:46:07.489198   53769 main.go:141] libmachine: (test-preload-031254) Calling .DriverName
	I0429 19:46:07.490827   53769 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0429 19:46:07.491931   53769 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:46:07.492229   53769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:46:07.492267   53769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:46:07.506829   53769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0429 19:46:07.507248   53769 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:46:07.507744   53769 main.go:141] libmachine: Using API Version  1
	I0429 19:46:07.507775   53769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:46:07.508058   53769 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:46:07.508238   53769 main.go:141] libmachine: (test-preload-031254) Calling .DriverName
	I0429 19:46:07.542289   53769 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 19:46:07.543477   53769 start.go:297] selected driver: kvm2
	I0429 19:46:07.543492   53769 start.go:901] validating driver "kvm2" against &{Name:test-preload-031254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.24.4 ClusterName:test-preload-031254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L M
ountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:46:07.543598   53769 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:46:07.544528   53769 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:46:07.544615   53769 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 19:46:07.559162   53769 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 19:46:07.559449   53769 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:46:07.559510   53769 cni.go:84] Creating CNI manager for ""
	I0429 19:46:07.559527   53769 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:46:07.559580   53769 start.go:340] cluster config:
	{Name:test-preload-031254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-031254 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:46:07.559669   53769 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:46:07.561856   53769 out.go:177] * Starting "test-preload-031254" primary control-plane node in "test-preload-031254" cluster
	I0429 19:46:07.562917   53769 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0429 19:46:07.693415   53769 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0429 19:46:07.693443   53769 cache.go:56] Caching tarball of preloaded images
	I0429 19:46:07.693601   53769 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0429 19:46:07.695329   53769 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0429 19:46:07.696752   53769 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0429 19:46:07.806196   53769 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0429 19:46:20.272468   53769 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0429 19:46:20.272591   53769 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0429 19:46:21.110357   53769 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0429 19:46:21.110513   53769 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254/config.json ...
	I0429 19:46:21.110767   53769 start.go:360] acquireMachinesLock for test-preload-031254: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:46:21.110846   53769 start.go:364] duration metric: took 55.94µs to acquireMachinesLock for "test-preload-031254"
	I0429 19:46:21.110867   53769 start.go:96] Skipping create...Using existing machine configuration
	I0429 19:46:21.110875   53769 fix.go:54] fixHost starting: 
	I0429 19:46:21.111197   53769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:46:21.111241   53769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:46:21.127062   53769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44243
	I0429 19:46:21.127507   53769 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:46:21.128034   53769 main.go:141] libmachine: Using API Version  1
	I0429 19:46:21.128061   53769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:46:21.128435   53769 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:46:21.128610   53769 main.go:141] libmachine: (test-preload-031254) Calling .DriverName
	I0429 19:46:21.128741   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetState
	I0429 19:46:21.130420   53769 fix.go:112] recreateIfNeeded on test-preload-031254: state=Stopped err=<nil>
	I0429 19:46:21.130444   53769 main.go:141] libmachine: (test-preload-031254) Calling .DriverName
	W0429 19:46:21.130614   53769 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 19:46:21.132969   53769 out.go:177] * Restarting existing kvm2 VM for "test-preload-031254" ...
	I0429 19:46:21.134541   53769 main.go:141] libmachine: (test-preload-031254) Calling .Start
	I0429 19:46:21.134722   53769 main.go:141] libmachine: (test-preload-031254) Ensuring networks are active...
	I0429 19:46:21.135601   53769 main.go:141] libmachine: (test-preload-031254) Ensuring network default is active
	I0429 19:46:21.135958   53769 main.go:141] libmachine: (test-preload-031254) Ensuring network mk-test-preload-031254 is active
	I0429 19:46:21.136308   53769 main.go:141] libmachine: (test-preload-031254) Getting domain xml...
	I0429 19:46:21.137000   53769 main.go:141] libmachine: (test-preload-031254) Creating domain...
	I0429 19:46:22.327543   53769 main.go:141] libmachine: (test-preload-031254) Waiting to get IP...
	I0429 19:46:22.328316   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:22.328688   53769 main.go:141] libmachine: (test-preload-031254) DBG | unable to find current IP address of domain test-preload-031254 in network mk-test-preload-031254
	I0429 19:46:22.328747   53769 main.go:141] libmachine: (test-preload-031254) DBG | I0429 19:46:22.328674   53852 retry.go:31] will retry after 287.311512ms: waiting for machine to come up
	I0429 19:46:22.617249   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:22.617687   53769 main.go:141] libmachine: (test-preload-031254) DBG | unable to find current IP address of domain test-preload-031254 in network mk-test-preload-031254
	I0429 19:46:22.617708   53769 main.go:141] libmachine: (test-preload-031254) DBG | I0429 19:46:22.617624   53852 retry.go:31] will retry after 381.638218ms: waiting for machine to come up
	I0429 19:46:23.001086   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:23.001445   53769 main.go:141] libmachine: (test-preload-031254) DBG | unable to find current IP address of domain test-preload-031254 in network mk-test-preload-031254
	I0429 19:46:23.001482   53769 main.go:141] libmachine: (test-preload-031254) DBG | I0429 19:46:23.001392   53852 retry.go:31] will retry after 413.916593ms: waiting for machine to come up
	I0429 19:46:23.416870   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:23.417297   53769 main.go:141] libmachine: (test-preload-031254) DBG | unable to find current IP address of domain test-preload-031254 in network mk-test-preload-031254
	I0429 19:46:23.417317   53769 main.go:141] libmachine: (test-preload-031254) DBG | I0429 19:46:23.417271   53852 retry.go:31] will retry after 405.66507ms: waiting for machine to come up
	I0429 19:46:23.824846   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:23.825205   53769 main.go:141] libmachine: (test-preload-031254) DBG | unable to find current IP address of domain test-preload-031254 in network mk-test-preload-031254
	I0429 19:46:23.825239   53769 main.go:141] libmachine: (test-preload-031254) DBG | I0429 19:46:23.825141   53852 retry.go:31] will retry after 718.947901ms: waiting for machine to come up
	I0429 19:46:24.545980   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:24.546379   53769 main.go:141] libmachine: (test-preload-031254) DBG | unable to find current IP address of domain test-preload-031254 in network mk-test-preload-031254
	I0429 19:46:24.546408   53769 main.go:141] libmachine: (test-preload-031254) DBG | I0429 19:46:24.546339   53852 retry.go:31] will retry after 749.723304ms: waiting for machine to come up
	I0429 19:46:25.297439   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:25.297996   53769 main.go:141] libmachine: (test-preload-031254) DBG | unable to find current IP address of domain test-preload-031254 in network mk-test-preload-031254
	I0429 19:46:25.298026   53769 main.go:141] libmachine: (test-preload-031254) DBG | I0429 19:46:25.297939   53852 retry.go:31] will retry after 1.191528816s: waiting for machine to come up
	I0429 19:46:26.490602   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:26.491040   53769 main.go:141] libmachine: (test-preload-031254) DBG | unable to find current IP address of domain test-preload-031254 in network mk-test-preload-031254
	I0429 19:46:26.491069   53769 main.go:141] libmachine: (test-preload-031254) DBG | I0429 19:46:26.490994   53852 retry.go:31] will retry after 1.351065583s: waiting for machine to come up
	I0429 19:46:27.844401   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:27.844784   53769 main.go:141] libmachine: (test-preload-031254) DBG | unable to find current IP address of domain test-preload-031254 in network mk-test-preload-031254
	I0429 19:46:27.844814   53769 main.go:141] libmachine: (test-preload-031254) DBG | I0429 19:46:27.844733   53852 retry.go:31] will retry after 1.769627309s: waiting for machine to come up
	I0429 19:46:29.616627   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:29.617034   53769 main.go:141] libmachine: (test-preload-031254) DBG | unable to find current IP address of domain test-preload-031254 in network mk-test-preload-031254
	I0429 19:46:29.617060   53769 main.go:141] libmachine: (test-preload-031254) DBG | I0429 19:46:29.616980   53852 retry.go:31] will retry after 1.785344508s: waiting for machine to come up
	I0429 19:46:31.404384   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:31.404784   53769 main.go:141] libmachine: (test-preload-031254) DBG | unable to find current IP address of domain test-preload-031254 in network mk-test-preload-031254
	I0429 19:46:31.404813   53769 main.go:141] libmachine: (test-preload-031254) DBG | I0429 19:46:31.404737   53852 retry.go:31] will retry after 2.052287612s: waiting for machine to come up
	I0429 19:46:33.459307   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:33.459816   53769 main.go:141] libmachine: (test-preload-031254) DBG | unable to find current IP address of domain test-preload-031254 in network mk-test-preload-031254
	I0429 19:46:33.459848   53769 main.go:141] libmachine: (test-preload-031254) DBG | I0429 19:46:33.459763   53852 retry.go:31] will retry after 2.951840797s: waiting for machine to come up
	I0429 19:46:36.412772   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:36.413117   53769 main.go:141] libmachine: (test-preload-031254) DBG | unable to find current IP address of domain test-preload-031254 in network mk-test-preload-031254
	I0429 19:46:36.413147   53769 main.go:141] libmachine: (test-preload-031254) DBG | I0429 19:46:36.413067   53852 retry.go:31] will retry after 4.315281731s: waiting for machine to come up
	I0429 19:46:40.731163   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:40.731620   53769 main.go:141] libmachine: (test-preload-031254) Found IP for machine: 192.168.39.46
	I0429 19:46:40.731654   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has current primary IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:40.731664   53769 main.go:141] libmachine: (test-preload-031254) Reserving static IP address...
	I0429 19:46:40.732025   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "test-preload-031254", mac: "52:54:00:b7:5a:2d", ip: "192.168.39.46"} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:40.732051   53769 main.go:141] libmachine: (test-preload-031254) DBG | skip adding static IP to network mk-test-preload-031254 - found existing host DHCP lease matching {name: "test-preload-031254", mac: "52:54:00:b7:5a:2d", ip: "192.168.39.46"}
	I0429 19:46:40.732063   53769 main.go:141] libmachine: (test-preload-031254) Reserved static IP address: 192.168.39.46
	I0429 19:46:40.732078   53769 main.go:141] libmachine: (test-preload-031254) Waiting for SSH to be available...
	I0429 19:46:40.732090   53769 main.go:141] libmachine: (test-preload-031254) DBG | Getting to WaitForSSH function...
	I0429 19:46:40.734256   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:40.734566   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:40.734596   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:40.734758   53769 main.go:141] libmachine: (test-preload-031254) DBG | Using SSH client type: external
	I0429 19:46:40.734788   53769 main.go:141] libmachine: (test-preload-031254) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/test-preload-031254/id_rsa (-rw-------)
	I0429 19:46:40.734823   53769 main.go:141] libmachine: (test-preload-031254) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.46 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/test-preload-031254/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:46:40.734836   53769 main.go:141] libmachine: (test-preload-031254) DBG | About to run SSH command:
	I0429 19:46:40.734852   53769 main.go:141] libmachine: (test-preload-031254) DBG | exit 0
	I0429 19:46:40.853969   53769 main.go:141] libmachine: (test-preload-031254) DBG | SSH cmd err, output: <nil>: 
	I0429 19:46:40.854371   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetConfigRaw
	I0429 19:46:40.854988   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetIP
	I0429 19:46:40.857342   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:40.857659   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:40.857690   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:40.857887   53769 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254/config.json ...
	I0429 19:46:40.858096   53769 machine.go:94] provisionDockerMachine start ...
	I0429 19:46:40.858117   53769 main.go:141] libmachine: (test-preload-031254) Calling .DriverName
	I0429 19:46:40.858359   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHHostname
	I0429 19:46:40.860500   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:40.860803   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:40.860822   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:40.860981   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHPort
	I0429 19:46:40.861177   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:40.861327   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:40.861453   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHUsername
	I0429 19:46:40.861587   53769 main.go:141] libmachine: Using SSH client type: native
	I0429 19:46:40.861760   53769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0429 19:46:40.861771   53769 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 19:46:40.962537   53769 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 19:46:40.962565   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetMachineName
	I0429 19:46:40.962811   53769 buildroot.go:166] provisioning hostname "test-preload-031254"
	I0429 19:46:40.962835   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetMachineName
	I0429 19:46:40.963028   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHHostname
	I0429 19:46:40.965536   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:40.965865   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:40.965894   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:40.966031   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHPort
	I0429 19:46:40.966255   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:40.966422   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:40.966572   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHUsername
	I0429 19:46:40.966764   53769 main.go:141] libmachine: Using SSH client type: native
	I0429 19:46:40.966930   53769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0429 19:46:40.966945   53769 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-031254 && echo "test-preload-031254" | sudo tee /etc/hostname
	I0429 19:46:41.082048   53769 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-031254
	
	I0429 19:46:41.082094   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHHostname
	I0429 19:46:41.084707   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.085036   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:41.085067   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.085261   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHPort
	I0429 19:46:41.085447   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:41.085625   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:41.085733   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHUsername
	I0429 19:46:41.085878   53769 main.go:141] libmachine: Using SSH client type: native
	I0429 19:46:41.086031   53769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0429 19:46:41.086051   53769 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-031254' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-031254/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-031254' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:46:41.195626   53769 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:46:41.195655   53769 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:46:41.195671   53769 buildroot.go:174] setting up certificates
	I0429 19:46:41.195679   53769 provision.go:84] configureAuth start
	I0429 19:46:41.195687   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetMachineName
	I0429 19:46:41.195986   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetIP
	I0429 19:46:41.198588   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.198944   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:41.198972   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.199131   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHHostname
	I0429 19:46:41.201278   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.201610   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:41.201635   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.201779   53769 provision.go:143] copyHostCerts
	I0429 19:46:41.201829   53769 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:46:41.201841   53769 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:46:41.201924   53769 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:46:41.202027   53769 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:46:41.202038   53769 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:46:41.202083   53769 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:46:41.202155   53769 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:46:41.202165   53769 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:46:41.202200   53769 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:46:41.202278   53769 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.test-preload-031254 san=[127.0.0.1 192.168.39.46 localhost minikube test-preload-031254]
	I0429 19:46:41.300670   53769 provision.go:177] copyRemoteCerts
	I0429 19:46:41.300725   53769 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:46:41.300757   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHHostname
	I0429 19:46:41.303415   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.303688   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:41.303709   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.303868   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHPort
	I0429 19:46:41.304061   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:41.304215   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHUsername
	I0429 19:46:41.304312   53769 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/test-preload-031254/id_rsa Username:docker}
	I0429 19:46:41.385551   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:46:41.411550   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0429 19:46:41.437707   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 19:46:41.463058   53769 provision.go:87] duration metric: took 267.369341ms to configureAuth
	I0429 19:46:41.463085   53769 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:46:41.463271   53769 config.go:182] Loaded profile config "test-preload-031254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0429 19:46:41.463358   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHHostname
	I0429 19:46:41.465898   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.466191   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:41.466219   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.466374   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHPort
	I0429 19:46:41.466574   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:41.466731   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:41.466884   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHUsername
	I0429 19:46:41.467058   53769 main.go:141] libmachine: Using SSH client type: native
	I0429 19:46:41.467273   53769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0429 19:46:41.467301   53769 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:46:41.746641   53769 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 19:46:41.746672   53769 machine.go:97] duration metric: took 888.562103ms to provisionDockerMachine
	I0429 19:46:41.746684   53769 start.go:293] postStartSetup for "test-preload-031254" (driver="kvm2")
	I0429 19:46:41.746694   53769 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:46:41.746708   53769 main.go:141] libmachine: (test-preload-031254) Calling .DriverName
	I0429 19:46:41.747012   53769 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:46:41.747042   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHHostname
	I0429 19:46:41.749589   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.749972   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:41.750005   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.750103   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHPort
	I0429 19:46:41.750280   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:41.750436   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHUsername
	I0429 19:46:41.750556   53769 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/test-preload-031254/id_rsa Username:docker}
	I0429 19:46:41.830155   53769 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:46:41.834842   53769 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:46:41.834864   53769 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:46:41.834932   53769 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:46:41.835034   53769 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:46:41.835171   53769 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:46:41.845944   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:46:41.872329   53769 start.go:296] duration metric: took 125.627274ms for postStartSetup
	I0429 19:46:41.872363   53769 fix.go:56] duration metric: took 20.761489092s for fixHost
	I0429 19:46:41.872384   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHHostname
	I0429 19:46:41.874917   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.875266   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:41.875292   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.875443   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHPort
	I0429 19:46:41.875657   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:41.875806   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:41.875968   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHUsername
	I0429 19:46:41.876139   53769 main.go:141] libmachine: Using SSH client type: native
	I0429 19:46:41.876312   53769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0429 19:46:41.876324   53769 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 19:46:41.975521   53769 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714420001.945667648
	
	I0429 19:46:41.975570   53769 fix.go:216] guest clock: 1714420001.945667648
	I0429 19:46:41.975580   53769 fix.go:229] Guest: 2024-04-29 19:46:41.945667648 +0000 UTC Remote: 2024-04-29 19:46:41.872366496 +0000 UTC m=+34.460638622 (delta=73.301152ms)
	I0429 19:46:41.975609   53769 fix.go:200] guest clock delta is within tolerance: 73.301152ms
	I0429 19:46:41.975625   53769 start.go:83] releasing machines lock for "test-preload-031254", held for 20.864760721s
	I0429 19:46:41.975652   53769 main.go:141] libmachine: (test-preload-031254) Calling .DriverName
	I0429 19:46:41.975911   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetIP
	I0429 19:46:41.978610   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.979010   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:41.979037   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.979173   53769 main.go:141] libmachine: (test-preload-031254) Calling .DriverName
	I0429 19:46:41.979652   53769 main.go:141] libmachine: (test-preload-031254) Calling .DriverName
	I0429 19:46:41.979849   53769 main.go:141] libmachine: (test-preload-031254) Calling .DriverName
	I0429 19:46:41.979919   53769 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:46:41.979960   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHHostname
	I0429 19:46:41.980062   53769 ssh_runner.go:195] Run: cat /version.json
	I0429 19:46:41.980087   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHHostname
	I0429 19:46:41.982611   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.982884   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:41.982915   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.982935   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.983106   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHPort
	I0429 19:46:41.983286   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:41.983426   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHUsername
	I0429 19:46:41.983466   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:41.983495   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:41.983574   53769 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/test-preload-031254/id_rsa Username:docker}
	I0429 19:46:41.983657   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHPort
	I0429 19:46:41.983801   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:46:41.983976   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHUsername
	I0429 19:46:41.984108   53769 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/test-preload-031254/id_rsa Username:docker}
	I0429 19:46:42.080526   53769 ssh_runner.go:195] Run: systemctl --version
	I0429 19:46:42.086968   53769 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:46:42.232346   53769 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:46:42.239368   53769 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:46:42.239447   53769 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:46:42.257677   53769 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:46:42.257700   53769 start.go:494] detecting cgroup driver to use...
	I0429 19:46:42.257760   53769 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:46:42.275511   53769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:46:42.290879   53769 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:46:42.290933   53769 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:46:42.305962   53769 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:46:42.321111   53769 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:46:42.443983   53769 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:46:42.598489   53769 docker.go:233] disabling docker service ...
	I0429 19:46:42.598549   53769 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:46:42.614608   53769 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:46:42.628715   53769 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:46:42.765773   53769 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:46:42.893170   53769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 19:46:42.908815   53769 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:46:42.929173   53769 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0429 19:46:42.929243   53769 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:46:42.939971   53769 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:46:42.940029   53769 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:46:42.950655   53769 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:46:42.961222   53769 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:46:42.971923   53769 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:46:42.982678   53769 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:46:42.994385   53769 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:46:43.012797   53769 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:46:43.023546   53769 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:46:43.032927   53769 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 19:46:43.032967   53769 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 19:46:43.047191   53769 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:46:43.057304   53769 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:46:43.175931   53769 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 19:46:43.326052   53769 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:46:43.326140   53769 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:46:43.332044   53769 start.go:562] Will wait 60s for crictl version
	I0429 19:46:43.332095   53769 ssh_runner.go:195] Run: which crictl
	I0429 19:46:43.336318   53769 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:46:43.377022   53769 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 19:46:43.377107   53769 ssh_runner.go:195] Run: crio --version
	I0429 19:46:43.406567   53769 ssh_runner.go:195] Run: crio --version
	I0429 19:46:43.439257   53769 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0429 19:46:43.440901   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetIP
	I0429 19:46:43.443564   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:43.443887   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:46:43.443916   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:46:43.444076   53769 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 19:46:43.448597   53769 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:46:43.463277   53769 kubeadm.go:877] updating cluster {Name:test-preload-031254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-031254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 19:46:43.463428   53769 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0429 19:46:43.463473   53769 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:46:43.506271   53769 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0429 19:46:43.506324   53769 ssh_runner.go:195] Run: which lz4
	I0429 19:46:43.510778   53769 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 19:46:43.515315   53769 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 19:46:43.515350   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0429 19:46:45.365889   53769 crio.go:462] duration metric: took 1.85515395s to copy over tarball
	I0429 19:46:45.365963   53769 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 19:46:48.130583   53769 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.764595573s)
	I0429 19:46:48.130618   53769 crio.go:469] duration metric: took 2.764702553s to extract the tarball
	I0429 19:46:48.130628   53769 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 19:46:48.174212   53769 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:46:48.221732   53769 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0429 19:46:48.221755   53769 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 19:46:48.221826   53769 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:46:48.221847   53769 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0429 19:46:48.221866   53769 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 19:46:48.221884   53769 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0429 19:46:48.221831   53769 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0429 19:46:48.221893   53769 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0429 19:46:48.221866   53769 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0429 19:46:48.221831   53769 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0429 19:46:48.223436   53769 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0429 19:46:48.223465   53769 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 19:46:48.223470   53769 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:46:48.223442   53769 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0429 19:46:48.223489   53769 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0429 19:46:48.223501   53769 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0429 19:46:48.223512   53769 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0429 19:46:48.223814   53769 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0429 19:46:48.389328   53769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0429 19:46:48.417414   53769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0429 19:46:48.437554   53769 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0429 19:46:48.437616   53769 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0429 19:46:48.437669   53769 ssh_runner.go:195] Run: which crictl
	I0429 19:46:48.463587   53769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0429 19:46:48.470853   53769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0429 19:46:48.471064   53769 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0429 19:46:48.471103   53769 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0429 19:46:48.471172   53769 ssh_runner.go:195] Run: which crictl
	I0429 19:46:48.531398   53769 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0429 19:46:48.531445   53769 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0429 19:46:48.531479   53769 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0429 19:46:48.531493   53769 ssh_runner.go:195] Run: which crictl
	I0429 19:46:48.531530   53769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0429 19:46:48.531575   53769 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0429 19:46:48.536337   53769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0429 19:46:48.581017   53769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0429 19:46:48.582713   53769 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0429 19:46:48.582760   53769 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0429 19:46:48.582774   53769 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0429 19:46:48.582803   53769 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0429 19:46:48.582824   53769 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0429 19:46:48.592286   53769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0429 19:46:48.593540   53769 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0429 19:46:48.593637   53769 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0429 19:46:48.599665   53769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0429 19:46:48.621733   53769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0429 19:46:48.712621   53769 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0429 19:46:48.712749   53769 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0429 19:46:48.712789   53769 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0429 19:46:48.712844   53769 ssh_runner.go:195] Run: which crictl
	I0429 19:46:49.128495   53769 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:46:51.039074   53769 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.456218028s)
	I0429 19:46:51.039101   53769 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0429 19:46:51.039122   53769 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0429 19:46:51.039160   53769 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4: (2.446841395s)
	I0429 19:46:51.039209   53769 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0429 19:46:51.039227   53769 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.445570278s)
	I0429 19:46:51.039171   53769 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0429 19:46:51.039287   53769 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4: (2.439603358s)
	I0429 19:46:51.039243   53769 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0429 19:46:51.039321   53769 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0429 19:46:51.039338   53769 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0429 19:46:51.039352   53769 ssh_runner.go:195] Run: which crictl
	I0429 19:46:51.039376   53769 ssh_runner.go:195] Run: which crictl
	I0429 19:46:51.039430   53769 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0: (2.417674751s)
	I0429 19:46:51.039255   53769 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0429 19:46:51.039466   53769 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0429 19:46:51.039479   53769 ssh_runner.go:235] Completed: which crictl: (2.326621543s)
	I0429 19:46:51.039492   53769 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0429 19:46:51.039515   53769 ssh_runner.go:195] Run: which crictl
	I0429 19:46:51.039530   53769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0429 19:46:51.039528   53769 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.911007477s)
	I0429 19:46:51.050521   53769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0429 19:46:51.050600   53769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0429 19:46:51.941355   53769 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0429 19:46:51.941394   53769 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0429 19:46:51.941438   53769 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0429 19:46:51.941519   53769 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0429 19:46:51.941551   53769 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0429 19:46:51.941619   53769 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0429 19:46:51.941653   53769 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0429 19:46:51.941674   53769 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0429 19:46:51.941708   53769 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0429 19:46:51.941743   53769 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0429 19:46:52.108402   53769 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0429 19:46:52.108503   53769 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0429 19:46:52.108554   53769 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0429 19:46:52.108576   53769 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0429 19:46:52.108577   53769 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0429 19:46:52.108614   53769 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0429 19:46:52.108625   53769 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0429 19:46:52.108642   53769 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0429 19:46:52.862965   53769 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0429 19:46:52.863029   53769 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0429 19:46:52.863067   53769 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0429 19:46:52.863179   53769 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0429 19:46:53.311442   53769 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0429 19:46:53.311481   53769 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0429 19:46:53.311532   53769 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0429 19:46:53.759428   53769 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0429 19:46:53.759481   53769 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0429 19:46:53.759552   53769 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0429 19:46:56.013841   53769 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.254261184s)
	I0429 19:46:56.013873   53769 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0429 19:46:56.013899   53769 cache_images.go:123] Successfully loaded all cached images
	I0429 19:46:56.013903   53769 cache_images.go:92] duration metric: took 7.792137815s to LoadCachedImages
	I0429 19:46:56.013912   53769 kubeadm.go:928] updating node { 192.168.39.46 8443 v1.24.4 crio true true} ...
	I0429 19:46:56.014034   53769 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-031254 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-031254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:46:56.014136   53769 ssh_runner.go:195] Run: crio config
	I0429 19:46:56.071651   53769 cni.go:84] Creating CNI manager for ""
	I0429 19:46:56.071680   53769 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:46:56.071697   53769 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 19:46:56.071721   53769 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.46 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-031254 NodeName:test-preload-031254 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.46"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 19:46:56.071877   53769 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.46
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-031254"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.46
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.46"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 19:46:56.071948   53769 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0429 19:46:56.082903   53769 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 19:46:56.082986   53769 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 19:46:56.092977   53769 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0429 19:46:56.112535   53769 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:46:56.131063   53769 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0429 19:46:56.149848   53769 ssh_runner.go:195] Run: grep 192.168.39.46	control-plane.minikube.internal$ /etc/hosts
	I0429 19:46:56.154433   53769 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.46	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:46:56.168493   53769 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:46:56.295689   53769 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:46:56.312859   53769 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254 for IP: 192.168.39.46
	I0429 19:46:56.312885   53769 certs.go:194] generating shared ca certs ...
	I0429 19:46:56.312899   53769 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:46:56.313058   53769 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 19:46:56.313099   53769 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 19:46:56.313109   53769 certs.go:256] generating profile certs ...
	I0429 19:46:56.313182   53769 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254/client.key
	I0429 19:46:56.313237   53769 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254/apiserver.key.42869e59
	I0429 19:46:56.313270   53769 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254/proxy-client.key
	I0429 19:46:56.313373   53769 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 19:46:56.313407   53769 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 19:46:56.313416   53769 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 19:46:56.313437   53769 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 19:46:56.313458   53769 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 19:46:56.313480   53769 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 19:46:56.313516   53769 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:46:56.314180   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:46:56.367768   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 19:46:56.398862   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:46:56.439569   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:46:56.474354   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0429 19:46:56.509998   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 19:46:56.535858   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:46:56.562341   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 19:46:56.588261   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:46:56.613375   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 19:46:56.638645   53769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 19:46:56.665313   53769 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 19:46:56.684804   53769 ssh_runner.go:195] Run: openssl version
	I0429 19:46:56.691325   53769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:46:56.705117   53769 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:46:56.710721   53769 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:46:56.710786   53769 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:46:56.717546   53769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:46:56.730381   53769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 19:46:56.742749   53769 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 19:46:56.748142   53769 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:46:56.748200   53769 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 19:46:56.754606   53769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 19:46:56.767462   53769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 19:46:56.780255   53769 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 19:46:56.785554   53769 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:46:56.785622   53769 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 19:46:56.792533   53769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:46:56.805725   53769 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:46:56.811102   53769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 19:46:56.817734   53769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 19:46:56.824283   53769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 19:46:56.830832   53769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 19:46:56.837113   53769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 19:46:56.843485   53769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 19:46:56.849987   53769 kubeadm.go:391] StartCluster: {Name:test-preload-031254 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-031254 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:46:56.850109   53769 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 19:46:56.850163   53769 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 19:46:56.889029   53769 cri.go:89] found id: ""
	I0429 19:46:56.889104   53769 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 19:46:56.900725   53769 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 19:46:56.900752   53769 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 19:46:56.900759   53769 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 19:46:56.900808   53769 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 19:46:56.911936   53769 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:46:56.912355   53769 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-031254" does not appear in /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:46:56.912478   53769 kubeconfig.go:62] /home/jenkins/minikube-integration/18774-7754/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-031254" cluster setting kubeconfig missing "test-preload-031254" context setting]
	I0429 19:46:56.912758   53769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:46:56.913331   53769 kapi.go:59] client config for test-preload-031254: &rest.Config{Host:"https://192.168.39.46:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254/client.crt", KeyFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254/client.key", CAFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 19:46:56.913859   53769 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 19:46:56.924207   53769 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.46
	I0429 19:46:56.924233   53769 kubeadm.go:1154] stopping kube-system containers ...
	I0429 19:46:56.924245   53769 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 19:46:56.924294   53769 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 19:46:56.972256   53769 cri.go:89] found id: ""
	I0429 19:46:56.972348   53769 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 19:46:56.989453   53769 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 19:46:57.000255   53769 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 19:46:57.000273   53769 kubeadm.go:156] found existing configuration files:
	
	I0429 19:46:57.000310   53769 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 19:46:57.010408   53769 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 19:46:57.010458   53769 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 19:46:57.021146   53769 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 19:46:57.031524   53769 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 19:46:57.031584   53769 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 19:46:57.042291   53769 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 19:46:57.052483   53769 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 19:46:57.052521   53769 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 19:46:57.063097   53769 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 19:46:57.073123   53769 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 19:46:57.073166   53769 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 19:46:57.083520   53769 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 19:46:57.094182   53769 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:46:57.189660   53769 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:46:57.730845   53769 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:46:58.000090   53769 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:46:58.087119   53769 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:46:58.221962   53769 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:46:58.222077   53769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:46:58.722190   53769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:46:59.222965   53769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:46:59.257771   53769 api_server.go:72] duration metric: took 1.035805333s to wait for apiserver process to appear ...
	I0429 19:46:59.257806   53769 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:46:59.257828   53769 api_server.go:253] Checking apiserver healthz at https://192.168.39.46:8443/healthz ...
	I0429 19:46:59.258485   53769 api_server.go:269] stopped: https://192.168.39.46:8443/healthz: Get "https://192.168.39.46:8443/healthz": dial tcp 192.168.39.46:8443: connect: connection refused
	I0429 19:46:59.758265   53769 api_server.go:253] Checking apiserver healthz at https://192.168.39.46:8443/healthz ...
	I0429 19:47:03.254110   53769 api_server.go:279] https://192.168.39.46:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 19:47:03.254151   53769 api_server.go:103] status: https://192.168.39.46:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 19:47:03.254166   53769 api_server.go:253] Checking apiserver healthz at https://192.168.39.46:8443/healthz ...
	I0429 19:47:03.323111   53769 api_server.go:279] https://192.168.39.46:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 19:47:03.323139   53769 api_server.go:103] status: https://192.168.39.46:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 19:47:03.323153   53769 api_server.go:253] Checking apiserver healthz at https://192.168.39.46:8443/healthz ...
	I0429 19:47:03.341181   53769 api_server.go:279] https://192.168.39.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:47:03.341218   53769 api_server.go:103] status: https://192.168.39.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:47:03.758761   53769 api_server.go:253] Checking apiserver healthz at https://192.168.39.46:8443/healthz ...
	I0429 19:47:03.768749   53769 api_server.go:279] https://192.168.39.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:47:03.768776   53769 api_server.go:103] status: https://192.168.39.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:47:04.258351   53769 api_server.go:253] Checking apiserver healthz at https://192.168.39.46:8443/healthz ...
	I0429 19:47:04.271293   53769 api_server.go:279] https://192.168.39.46:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:47:04.271322   53769 api_server.go:103] status: https://192.168.39.46:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:47:04.758903   53769 api_server.go:253] Checking apiserver healthz at https://192.168.39.46:8443/healthz ...
	I0429 19:47:04.766680   53769 api_server.go:279] https://192.168.39.46:8443/healthz returned 200:
	ok
	I0429 19:47:04.775625   53769 api_server.go:141] control plane version: v1.24.4
	I0429 19:47:04.775646   53769 api_server.go:131] duration metric: took 5.517833359s to wait for apiserver health ...
	I0429 19:47:04.775654   53769 cni.go:84] Creating CNI manager for ""
	I0429 19:47:04.775660   53769 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:47:04.777520   53769 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 19:47:04.778871   53769 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 19:47:04.800680   53769 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 19:47:04.834501   53769 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:47:04.848302   53769 system_pods.go:59] 7 kube-system pods found
	I0429 19:47:04.848335   53769 system_pods.go:61] "coredns-6d4b75cb6d-n967p" [348589ed-de0b-4408-b332-59ce536cf2e4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 19:47:04.848342   53769 system_pods.go:61] "etcd-test-preload-031254" [78ed3926-b6a5-4b09-a261-51546f737fbf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 19:47:04.848350   53769 system_pods.go:61] "kube-apiserver-test-preload-031254" [fa889da6-ce11-49ac-963b-dc41d2b576ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 19:47:04.848356   53769 system_pods.go:61] "kube-controller-manager-test-preload-031254" [bc9483eb-c6a3-4680-81ed-6bb181eed586] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 19:47:04.848361   53769 system_pods.go:61] "kube-proxy-twg4q" [34503dbf-f94a-40d3-9972-d14b58487c35] Running
	I0429 19:47:04.848367   53769 system_pods.go:61] "kube-scheduler-test-preload-031254" [2f686421-235f-4711-914b-f157b78b24fc] Running
	I0429 19:47:04.848375   53769 system_pods.go:61] "storage-provisioner" [2ced97b5-dd4a-4b9c-b006-e7739c446fef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0429 19:47:04.848387   53769 system_pods.go:74] duration metric: took 13.863057ms to wait for pod list to return data ...
	I0429 19:47:04.848401   53769 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:47:04.852350   53769 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:47:04.852382   53769 node_conditions.go:123] node cpu capacity is 2
	I0429 19:47:04.852396   53769 node_conditions.go:105] duration metric: took 3.98596ms to run NodePressure ...
	I0429 19:47:04.852420   53769 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:47:05.119860   53769 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 19:47:05.124216   53769 kubeadm.go:733] kubelet initialised
	I0429 19:47:05.124241   53769 kubeadm.go:734] duration metric: took 4.353191ms waiting for restarted kubelet to initialise ...
	I0429 19:47:05.124250   53769 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:47:05.130526   53769 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-n967p" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:05.135994   53769 pod_ready.go:97] node "test-preload-031254" hosting pod "coredns-6d4b75cb6d-n967p" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:05.136021   53769 pod_ready.go:81] duration metric: took 5.467295ms for pod "coredns-6d4b75cb6d-n967p" in "kube-system" namespace to be "Ready" ...
	E0429 19:47:05.136032   53769 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-031254" hosting pod "coredns-6d4b75cb6d-n967p" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:05.136040   53769 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:05.141592   53769 pod_ready.go:97] node "test-preload-031254" hosting pod "etcd-test-preload-031254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:05.141613   53769 pod_ready.go:81] duration metric: took 5.562047ms for pod "etcd-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	E0429 19:47:05.141622   53769 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-031254" hosting pod "etcd-test-preload-031254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:05.141630   53769 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:05.148357   53769 pod_ready.go:97] node "test-preload-031254" hosting pod "kube-apiserver-test-preload-031254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:05.148378   53769 pod_ready.go:81] duration metric: took 6.73686ms for pod "kube-apiserver-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	E0429 19:47:05.148387   53769 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-031254" hosting pod "kube-apiserver-test-preload-031254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:05.148394   53769 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:05.238680   53769 pod_ready.go:97] node "test-preload-031254" hosting pod "kube-controller-manager-test-preload-031254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:05.238716   53769 pod_ready.go:81] duration metric: took 90.31095ms for pod "kube-controller-manager-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	E0429 19:47:05.238728   53769 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-031254" hosting pod "kube-controller-manager-test-preload-031254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:05.238737   53769 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-twg4q" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:05.638700   53769 pod_ready.go:97] node "test-preload-031254" hosting pod "kube-proxy-twg4q" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:05.638730   53769 pod_ready.go:81] duration metric: took 399.983793ms for pod "kube-proxy-twg4q" in "kube-system" namespace to be "Ready" ...
	E0429 19:47:05.638739   53769 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-031254" hosting pod "kube-proxy-twg4q" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:05.638744   53769 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:06.038823   53769 pod_ready.go:97] node "test-preload-031254" hosting pod "kube-scheduler-test-preload-031254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:06.038856   53769 pod_ready.go:81] duration metric: took 400.105298ms for pod "kube-scheduler-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	E0429 19:47:06.038868   53769 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-031254" hosting pod "kube-scheduler-test-preload-031254" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:06.038878   53769 pod_ready.go:38] duration metric: took 914.599447ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:47:06.038895   53769 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 19:47:06.051581   53769 ops.go:34] apiserver oom_adj: -16
	I0429 19:47:06.051603   53769 kubeadm.go:591] duration metric: took 9.150838556s to restartPrimaryControlPlane
	I0429 19:47:06.051611   53769 kubeadm.go:393] duration metric: took 9.201631743s to StartCluster
	I0429 19:47:06.051629   53769 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:47:06.051692   53769 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:47:06.052261   53769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:47:06.052479   53769 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:47:06.054266   53769 out.go:177] * Verifying Kubernetes components...
	I0429 19:47:06.052560   53769 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 19:47:06.052638   53769 config.go:182] Loaded profile config "test-preload-031254": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0429 19:47:06.055678   53769 addons.go:69] Setting storage-provisioner=true in profile "test-preload-031254"
	I0429 19:47:06.055704   53769 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:47:06.055707   53769 addons.go:234] Setting addon storage-provisioner=true in "test-preload-031254"
	W0429 19:47:06.055790   53769 addons.go:243] addon storage-provisioner should already be in state true
	I0429 19:47:06.055819   53769 host.go:66] Checking if "test-preload-031254" exists ...
	I0429 19:47:06.055704   53769 addons.go:69] Setting default-storageclass=true in profile "test-preload-031254"
	I0429 19:47:06.055888   53769 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-031254"
	I0429 19:47:06.056174   53769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:47:06.056222   53769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:47:06.056222   53769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:47:06.056388   53769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:47:06.070860   53769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I0429 19:47:06.070872   53769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46239
	I0429 19:47:06.071282   53769 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:47:06.071282   53769 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:47:06.071725   53769 main.go:141] libmachine: Using API Version  1
	I0429 19:47:06.071742   53769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:47:06.071859   53769 main.go:141] libmachine: Using API Version  1
	I0429 19:47:06.071881   53769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:47:06.072100   53769 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:47:06.072136   53769 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:47:06.072259   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetState
	I0429 19:47:06.072591   53769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:47:06.072638   53769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:47:06.074787   53769 kapi.go:59] client config for test-preload-031254: &rest.Config{Host:"https://192.168.39.46:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254/client.crt", KeyFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/test-preload-031254/client.key", CAFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 19:47:06.075107   53769 addons.go:234] Setting addon default-storageclass=true in "test-preload-031254"
	W0429 19:47:06.075123   53769 addons.go:243] addon default-storageclass should already be in state true
	I0429 19:47:06.075152   53769 host.go:66] Checking if "test-preload-031254" exists ...
	I0429 19:47:06.075532   53769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:47:06.075573   53769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:47:06.087097   53769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0429 19:47:06.087557   53769 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:47:06.088071   53769 main.go:141] libmachine: Using API Version  1
	I0429 19:47:06.088114   53769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:47:06.088522   53769 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:47:06.088730   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetState
	I0429 19:47:06.090032   53769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39095
	I0429 19:47:06.090484   53769 main.go:141] libmachine: (test-preload-031254) Calling .DriverName
	I0429 19:47:06.090502   53769 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:47:06.092600   53769 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:47:06.090947   53769 main.go:141] libmachine: Using API Version  1
	I0429 19:47:06.092625   53769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:47:06.092921   53769 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:47:06.094098   53769 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 19:47:06.094113   53769 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 19:47:06.094131   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHHostname
	I0429 19:47:06.094666   53769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:47:06.094712   53769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:47:06.097136   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:47:06.097535   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:47:06.097569   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:47:06.097811   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHPort
	I0429 19:47:06.097987   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:47:06.098133   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHUsername
	I0429 19:47:06.098319   53769 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/test-preload-031254/id_rsa Username:docker}
	I0429 19:47:06.109185   53769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35447
	I0429 19:47:06.109562   53769 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:47:06.110040   53769 main.go:141] libmachine: Using API Version  1
	I0429 19:47:06.110056   53769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:47:06.110419   53769 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:47:06.110648   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetState
	I0429 19:47:06.112270   53769 main.go:141] libmachine: (test-preload-031254) Calling .DriverName
	I0429 19:47:06.112503   53769 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 19:47:06.112515   53769 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 19:47:06.112526   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHHostname
	I0429 19:47:06.115023   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:47:06.115472   53769 main.go:141] libmachine: (test-preload-031254) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:5a:2d", ip: ""} in network mk-test-preload-031254: {Iface:virbr1 ExpiryTime:2024-04-29 20:46:33 +0000 UTC Type:0 Mac:52:54:00:b7:5a:2d Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:test-preload-031254 Clientid:01:52:54:00:b7:5a:2d}
	I0429 19:47:06.115500   53769 main.go:141] libmachine: (test-preload-031254) DBG | domain test-preload-031254 has defined IP address 192.168.39.46 and MAC address 52:54:00:b7:5a:2d in network mk-test-preload-031254
	I0429 19:47:06.115703   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHPort
	I0429 19:47:06.115884   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHKeyPath
	I0429 19:47:06.116041   53769 main.go:141] libmachine: (test-preload-031254) Calling .GetSSHUsername
	I0429 19:47:06.116184   53769 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/test-preload-031254/id_rsa Username:docker}
	I0429 19:47:06.242107   53769 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:47:06.260588   53769 node_ready.go:35] waiting up to 6m0s for node "test-preload-031254" to be "Ready" ...
	I0429 19:47:06.375755   53769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 19:47:06.403711   53769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 19:47:07.373403   53769 main.go:141] libmachine: Making call to close driver server
	I0429 19:47:07.373437   53769 main.go:141] libmachine: (test-preload-031254) Calling .Close
	I0429 19:47:07.373480   53769 main.go:141] libmachine: Making call to close driver server
	I0429 19:47:07.373508   53769 main.go:141] libmachine: (test-preload-031254) Calling .Close
	I0429 19:47:07.373741   53769 main.go:141] libmachine: Successfully made call to close driver server
	I0429 19:47:07.373760   53769 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 19:47:07.373770   53769 main.go:141] libmachine: Making call to close driver server
	I0429 19:47:07.373777   53769 main.go:141] libmachine: (test-preload-031254) Calling .Close
	I0429 19:47:07.373824   53769 main.go:141] libmachine: (test-preload-031254) DBG | Closing plugin on server side
	I0429 19:47:07.373873   53769 main.go:141] libmachine: Successfully made call to close driver server
	I0429 19:47:07.373897   53769 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 19:47:07.373911   53769 main.go:141] libmachine: Making call to close driver server
	I0429 19:47:07.373919   53769 main.go:141] libmachine: (test-preload-031254) Calling .Close
	I0429 19:47:07.374005   53769 main.go:141] libmachine: Successfully made call to close driver server
	I0429 19:47:07.374010   53769 main.go:141] libmachine: (test-preload-031254) DBG | Closing plugin on server side
	I0429 19:47:07.374024   53769 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 19:47:07.374222   53769 main.go:141] libmachine: Successfully made call to close driver server
	I0429 19:47:07.374251   53769 main.go:141] libmachine: (test-preload-031254) DBG | Closing plugin on server side
	I0429 19:47:07.374254   53769 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 19:47:07.382534   53769 main.go:141] libmachine: Making call to close driver server
	I0429 19:47:07.382556   53769 main.go:141] libmachine: (test-preload-031254) Calling .Close
	I0429 19:47:07.382773   53769 main.go:141] libmachine: Successfully made call to close driver server
	I0429 19:47:07.382790   53769 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 19:47:07.384595   53769 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 19:47:07.385947   53769 addons.go:505] duration metric: took 1.333396083s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 19:47:08.265409   53769 node_ready.go:53] node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:10.767968   53769 node_ready.go:53] node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:13.266999   53769 node_ready.go:53] node "test-preload-031254" has status "Ready":"False"
	I0429 19:47:13.764281   53769 node_ready.go:49] node "test-preload-031254" has status "Ready":"True"
	I0429 19:47:13.764304   53769 node_ready.go:38] duration metric: took 7.503682043s for node "test-preload-031254" to be "Ready" ...
	I0429 19:47:13.764312   53769 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:47:13.769023   53769 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-n967p" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:13.773855   53769 pod_ready.go:92] pod "coredns-6d4b75cb6d-n967p" in "kube-system" namespace has status "Ready":"True"
	I0429 19:47:13.773875   53769 pod_ready.go:81] duration metric: took 4.82939ms for pod "coredns-6d4b75cb6d-n967p" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:13.773884   53769 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:13.777905   53769 pod_ready.go:92] pod "etcd-test-preload-031254" in "kube-system" namespace has status "Ready":"True"
	I0429 19:47:13.777925   53769 pod_ready.go:81] duration metric: took 4.035007ms for pod "etcd-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:13.777934   53769 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:13.782340   53769 pod_ready.go:92] pod "kube-apiserver-test-preload-031254" in "kube-system" namespace has status "Ready":"True"
	I0429 19:47:13.782357   53769 pod_ready.go:81] duration metric: took 4.415158ms for pod "kube-apiserver-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:13.782366   53769 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:13.786857   53769 pod_ready.go:92] pod "kube-controller-manager-test-preload-031254" in "kube-system" namespace has status "Ready":"True"
	I0429 19:47:13.786874   53769 pod_ready.go:81] duration metric: took 4.501941ms for pod "kube-controller-manager-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:13.786882   53769 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-twg4q" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:14.164994   53769 pod_ready.go:92] pod "kube-proxy-twg4q" in "kube-system" namespace has status "Ready":"True"
	I0429 19:47:14.165022   53769 pod_ready.go:81] duration metric: took 378.133343ms for pod "kube-proxy-twg4q" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:14.165034   53769 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:16.172381   53769 pod_ready.go:102] pod "kube-scheduler-test-preload-031254" in "kube-system" namespace has status "Ready":"False"
	I0429 19:47:17.672357   53769 pod_ready.go:92] pod "kube-scheduler-test-preload-031254" in "kube-system" namespace has status "Ready":"True"
	I0429 19:47:17.672386   53769 pod_ready.go:81] duration metric: took 3.507343506s for pod "kube-scheduler-test-preload-031254" in "kube-system" namespace to be "Ready" ...
	I0429 19:47:17.672410   53769 pod_ready.go:38] duration metric: took 3.908078177s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:47:17.672436   53769 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:47:17.672509   53769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:47:17.689312   53769 api_server.go:72] duration metric: took 11.636804158s to wait for apiserver process to appear ...
	I0429 19:47:17.689342   53769 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:47:17.689369   53769 api_server.go:253] Checking apiserver healthz at https://192.168.39.46:8443/healthz ...
	I0429 19:47:17.695287   53769 api_server.go:279] https://192.168.39.46:8443/healthz returned 200:
	ok
	I0429 19:47:17.696370   53769 api_server.go:141] control plane version: v1.24.4
	I0429 19:47:17.696390   53769 api_server.go:131] duration metric: took 7.0399ms to wait for apiserver health ...
	I0429 19:47:17.696399   53769 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:47:17.702368   53769 system_pods.go:59] 7 kube-system pods found
	I0429 19:47:17.702391   53769 system_pods.go:61] "coredns-6d4b75cb6d-n967p" [348589ed-de0b-4408-b332-59ce536cf2e4] Running
	I0429 19:47:17.702398   53769 system_pods.go:61] "etcd-test-preload-031254" [78ed3926-b6a5-4b09-a261-51546f737fbf] Running
	I0429 19:47:17.702403   53769 system_pods.go:61] "kube-apiserver-test-preload-031254" [fa889da6-ce11-49ac-963b-dc41d2b576ed] Running
	I0429 19:47:17.702408   53769 system_pods.go:61] "kube-controller-manager-test-preload-031254" [bc9483eb-c6a3-4680-81ed-6bb181eed586] Running
	I0429 19:47:17.702413   53769 system_pods.go:61] "kube-proxy-twg4q" [34503dbf-f94a-40d3-9972-d14b58487c35] Running
	I0429 19:47:17.702418   53769 system_pods.go:61] "kube-scheduler-test-preload-031254" [2f686421-235f-4711-914b-f157b78b24fc] Running
	I0429 19:47:17.702426   53769 system_pods.go:61] "storage-provisioner" [2ced97b5-dd4a-4b9c-b006-e7739c446fef] Running
	I0429 19:47:17.702435   53769 system_pods.go:74] duration metric: took 6.029333ms to wait for pod list to return data ...
	I0429 19:47:17.702444   53769 default_sa.go:34] waiting for default service account to be created ...
	I0429 19:47:17.765469   53769 default_sa.go:45] found service account: "default"
	I0429 19:47:17.765497   53769 default_sa.go:55] duration metric: took 63.045299ms for default service account to be created ...
	I0429 19:47:17.765507   53769 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 19:47:17.968791   53769 system_pods.go:86] 7 kube-system pods found
	I0429 19:47:17.968824   53769 system_pods.go:89] "coredns-6d4b75cb6d-n967p" [348589ed-de0b-4408-b332-59ce536cf2e4] Running
	I0429 19:47:17.968841   53769 system_pods.go:89] "etcd-test-preload-031254" [78ed3926-b6a5-4b09-a261-51546f737fbf] Running
	I0429 19:47:17.968847   53769 system_pods.go:89] "kube-apiserver-test-preload-031254" [fa889da6-ce11-49ac-963b-dc41d2b576ed] Running
	I0429 19:47:17.968854   53769 system_pods.go:89] "kube-controller-manager-test-preload-031254" [bc9483eb-c6a3-4680-81ed-6bb181eed586] Running
	I0429 19:47:17.968859   53769 system_pods.go:89] "kube-proxy-twg4q" [34503dbf-f94a-40d3-9972-d14b58487c35] Running
	I0429 19:47:17.968868   53769 system_pods.go:89] "kube-scheduler-test-preload-031254" [2f686421-235f-4711-914b-f157b78b24fc] Running
	I0429 19:47:17.968873   53769 system_pods.go:89] "storage-provisioner" [2ced97b5-dd4a-4b9c-b006-e7739c446fef] Running
	I0429 19:47:17.968882   53769 system_pods.go:126] duration metric: took 203.369238ms to wait for k8s-apps to be running ...
	I0429 19:47:17.968896   53769 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 19:47:17.968953   53769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:47:17.984321   53769 system_svc.go:56] duration metric: took 15.415148ms WaitForService to wait for kubelet
	I0429 19:47:17.984355   53769 kubeadm.go:576] duration metric: took 11.931850072s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:47:17.984373   53769 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:47:18.165030   53769 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:47:18.165059   53769 node_conditions.go:123] node cpu capacity is 2
	I0429 19:47:18.165070   53769 node_conditions.go:105] duration metric: took 180.692529ms to run NodePressure ...
	I0429 19:47:18.165085   53769 start.go:240] waiting for startup goroutines ...
	I0429 19:47:18.165095   53769 start.go:245] waiting for cluster config update ...
	I0429 19:47:18.165114   53769 start.go:254] writing updated cluster config ...
	I0429 19:47:18.165468   53769 ssh_runner.go:195] Run: rm -f paused
	I0429 19:47:18.213613   53769 start.go:600] kubectl: 1.30.0, cluster: 1.24.4 (minor skew: 6)
	I0429 19:47:18.215368   53769 out.go:177] 
	W0429 19:47:18.216660   53769 out.go:239] ! /usr/local/bin/kubectl is version 1.30.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0429 19:47:18.217913   53769 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0429 19:47:18.219284   53769 out.go:177] * Done! kubectl is now configured to use "test-preload-031254" cluster and "default" namespace by default
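	Note on the healthz sequence logged above: the progression from "connection refused" to 403 (the unauthenticated probe hits RBAC before bootstrap roles exist), then 500 with individual poststarthooks such as rbac/bootstrap-roles still failing, and finally 200 "ok" is the normal readiness sequence of a restarting kube-apiserver; the 500 bodies are useful because they name exactly which hook has not completed yet. Below is a minimal Go sketch of that polling pattern. It is illustrative only, not minikube's api_server.go code; the endpoint URL and the use of InsecureSkipVerify are assumptions made for the sketch (minikube itself authenticates with the client certificate, key, and CA shown in the kapi.go client config earlier in this log).

	// healthz_poll.go — hedged sketch of polling a kube-apiserver /healthz endpoint
	// until it reports 200 "ok", treating connection errors, 403, and 500 as
	// "not ready yet", roughly mirroring the retry cadence visible above (~500ms).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: a real client should verify the cluster CA and
				// present a client certificate instead of skipping verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				// e.g. "connection refused" while the apiserver container restarts
				time.Sleep(500 * time.Millisecond)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403: anonymous probe rejected before RBAC bootstrap roles exist.
			// 500: one or more poststarthooks still report "failed".
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// Hypothetical endpoint, matching the control-plane address used in this run.
		if err := waitForHealthz("https://192.168.39.46:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
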
	
	
	==> CRI-O <==
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.170785324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420039170761860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33669234-d0d1-4f22-9a53-e43d57340cf5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.171451161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77956b20-a047-4c29-941e-79918f1e13b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.171567377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77956b20-a047-4c29-941e-79918f1e13b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.171726794Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86410095f767d0fcbc349574041413c16342cddc5c1fe2d82ab1963582ddee6a,PodSandboxId:e6b3d3d6cd8c647b0bea15f431ce82a8c7dffb91e3400fbd2453f1f281a5291b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1714420032109751109,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-n967p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348589ed-de0b-4408-b332-59ce536cf2e4,},Annotations:map[string]string{io.kubernetes.container.hash: 503e6672,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff65ba0c988e6c2fd413cdf53ae8fd284e0632cf1ca7c5863151eb0fccd8c89,PodSandboxId:6dd93b079cf849b4966c75334dc8dda530710f41cc87a5a301857ff138dab4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714420025445320366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 2ced97b5-dd4a-4b9c-b006-e7739c446fef,},Annotations:map[string]string{io.kubernetes.container.hash: 805d61f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7799e49eaea19ca69d5fd3c661de9d8fb39e65c62c95dc4d1d0d2948854a639c,PodSandboxId:289e0162085c72d97acb63e70673d1bcdddfda58f65e810e59230e476c57df78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1714420024846377298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twg4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34
503dbf-f94a-40d3-9972-d14b58487c35,},Annotations:map[string]string{io.kubernetes.container.hash: 5005a2a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7ac94404f935390c9156ada20a72a24310cf9d9ccfce9853e7fee52edd3b02,PodSandboxId:78761038c0a7ba7ab64ab591145bdec412eaac8c36bf0f25ad93b3e01af15bf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1714420018968070587,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e6c54d730a5318f94468f58ac405a1,},Anno
tations:map[string]string{io.kubernetes.container.hash: ae3a9d22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9874c421aadf95b583c7aade534641d825e062d6423ee18a197289b7b458e8,PodSandboxId:086aae5ecaacf33e72bb0ab34a8729f854e0cc326af2eeecf6fa88ca29f18f30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1714420018984021908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9544572b22422125fe88c312c3459bb0,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8202e8dfca618a649a80d0b8753439184b1295da3b4ef7f12d3ce5542ce2a6ca,PodSandboxId:581c72b7ce651ea07cda7bd14f4957dd0ca8d42e91e440238d6febc0ff6c63b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1714420018956368538,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adff35bf255b79c5db0a9ab3ca3f80e7,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a917f2f0e5e797e11db5cb057c5051135d6b610008853b8d068a13f693679d5,PodSandboxId:71692e33fcf050cca9a163c38680a659512b0b80090f653cd30236627dd2f8a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1714420018846269350,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652d6cadbe8d440ac8b6e72a067e6d13,},Annotation
s:map[string]string{io.kubernetes.container.hash: 268f455f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77956b20-a047-4c29-941e-79918f1e13b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.213524260Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=666b7e13-b5c9-4fee-97b3-23efabc753a8 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.213607170Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=666b7e13-b5c9-4fee-97b3-23efabc753a8 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.214911956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=efa8a502-38f4-4f16-b872-bb4d327355a1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.215339417Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420039215318675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efa8a502-38f4-4f16-b872-bb4d327355a1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.216299832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db6bc973-4269-4960-b3fd-1e1ef2c70234 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.216353920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db6bc973-4269-4960-b3fd-1e1ef2c70234 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.216577180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86410095f767d0fcbc349574041413c16342cddc5c1fe2d82ab1963582ddee6a,PodSandboxId:e6b3d3d6cd8c647b0bea15f431ce82a8c7dffb91e3400fbd2453f1f281a5291b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1714420032109751109,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-n967p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348589ed-de0b-4408-b332-59ce536cf2e4,},Annotations:map[string]string{io.kubernetes.container.hash: 503e6672,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff65ba0c988e6c2fd413cdf53ae8fd284e0632cf1ca7c5863151eb0fccd8c89,PodSandboxId:6dd93b079cf849b4966c75334dc8dda530710f41cc87a5a301857ff138dab4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714420025445320366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 2ced97b5-dd4a-4b9c-b006-e7739c446fef,},Annotations:map[string]string{io.kubernetes.container.hash: 805d61f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7799e49eaea19ca69d5fd3c661de9d8fb39e65c62c95dc4d1d0d2948854a639c,PodSandboxId:289e0162085c72d97acb63e70673d1bcdddfda58f65e810e59230e476c57df78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1714420024846377298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twg4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34
503dbf-f94a-40d3-9972-d14b58487c35,},Annotations:map[string]string{io.kubernetes.container.hash: 5005a2a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7ac94404f935390c9156ada20a72a24310cf9d9ccfce9853e7fee52edd3b02,PodSandboxId:78761038c0a7ba7ab64ab591145bdec412eaac8c36bf0f25ad93b3e01af15bf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1714420018968070587,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e6c54d730a5318f94468f58ac405a1,},Anno
tations:map[string]string{io.kubernetes.container.hash: ae3a9d22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9874c421aadf95b583c7aade534641d825e062d6423ee18a197289b7b458e8,PodSandboxId:086aae5ecaacf33e72bb0ab34a8729f854e0cc326af2eeecf6fa88ca29f18f30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1714420018984021908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9544572b22422125fe88c312c3459bb0,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8202e8dfca618a649a80d0b8753439184b1295da3b4ef7f12d3ce5542ce2a6ca,PodSandboxId:581c72b7ce651ea07cda7bd14f4957dd0ca8d42e91e440238d6febc0ff6c63b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1714420018956368538,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adff35bf255b79c5db0a9ab3ca3f80e7,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a917f2f0e5e797e11db5cb057c5051135d6b610008853b8d068a13f693679d5,PodSandboxId:71692e33fcf050cca9a163c38680a659512b0b80090f653cd30236627dd2f8a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1714420018846269350,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652d6cadbe8d440ac8b6e72a067e6d13,},Annotation
s:map[string]string{io.kubernetes.container.hash: 268f455f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db6bc973-4269-4960-b3fd-1e1ef2c70234 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.266381400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d224a4d6-d77b-4f1f-8994-aa1e2a298598 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.266523485Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d224a4d6-d77b-4f1f-8994-aa1e2a298598 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.267782203Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e12a6964-211d-4c4d-9b17-f3b030e93500 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.268212730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420039268190922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e12a6964-211d-4c4d-9b17-f3b030e93500 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.268847487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e954ad5-9ab1-4da7-8915-9cdc52cd5340 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.268901959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e954ad5-9ab1-4da7-8915-9cdc52cd5340 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.269071136Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86410095f767d0fcbc349574041413c16342cddc5c1fe2d82ab1963582ddee6a,PodSandboxId:e6b3d3d6cd8c647b0bea15f431ce82a8c7dffb91e3400fbd2453f1f281a5291b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1714420032109751109,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-n967p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348589ed-de0b-4408-b332-59ce536cf2e4,},Annotations:map[string]string{io.kubernetes.container.hash: 503e6672,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff65ba0c988e6c2fd413cdf53ae8fd284e0632cf1ca7c5863151eb0fccd8c89,PodSandboxId:6dd93b079cf849b4966c75334dc8dda530710f41cc87a5a301857ff138dab4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714420025445320366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 2ced97b5-dd4a-4b9c-b006-e7739c446fef,},Annotations:map[string]string{io.kubernetes.container.hash: 805d61f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7799e49eaea19ca69d5fd3c661de9d8fb39e65c62c95dc4d1d0d2948854a639c,PodSandboxId:289e0162085c72d97acb63e70673d1bcdddfda58f65e810e59230e476c57df78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1714420024846377298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twg4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34
503dbf-f94a-40d3-9972-d14b58487c35,},Annotations:map[string]string{io.kubernetes.container.hash: 5005a2a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7ac94404f935390c9156ada20a72a24310cf9d9ccfce9853e7fee52edd3b02,PodSandboxId:78761038c0a7ba7ab64ab591145bdec412eaac8c36bf0f25ad93b3e01af15bf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1714420018968070587,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e6c54d730a5318f94468f58ac405a1,},Anno
tations:map[string]string{io.kubernetes.container.hash: ae3a9d22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9874c421aadf95b583c7aade534641d825e062d6423ee18a197289b7b458e8,PodSandboxId:086aae5ecaacf33e72bb0ab34a8729f854e0cc326af2eeecf6fa88ca29f18f30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1714420018984021908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9544572b22422125fe88c312c3459bb0,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8202e8dfca618a649a80d0b8753439184b1295da3b4ef7f12d3ce5542ce2a6ca,PodSandboxId:581c72b7ce651ea07cda7bd14f4957dd0ca8d42e91e440238d6febc0ff6c63b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1714420018956368538,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adff35bf255b79c5db0a9ab3ca3f80e7,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a917f2f0e5e797e11db5cb057c5051135d6b610008853b8d068a13f693679d5,PodSandboxId:71692e33fcf050cca9a163c38680a659512b0b80090f653cd30236627dd2f8a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1714420018846269350,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652d6cadbe8d440ac8b6e72a067e6d13,},Annotation
s:map[string]string{io.kubernetes.container.hash: 268f455f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e954ad5-9ab1-4da7-8915-9cdc52cd5340 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.313157121Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=081684bb-523a-4e9a-8bbe-6bc03a9d3d83 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.313228549Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=081684bb-523a-4e9a-8bbe-6bc03a9d3d83 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.316259584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfe6870c-3a5c-4142-9b1d-fc5a7e451e40 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.316837872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420039316812237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfe6870c-3a5c-4142-9b1d-fc5a7e451e40 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.317888955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca36b995-c148-4eda-ac2e-916936e7c46e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.317946412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca36b995-c148-4eda-ac2e-916936e7c46e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:47:19 test-preload-031254 crio[680]: time="2024-04-29 19:47:19.318106280Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86410095f767d0fcbc349574041413c16342cddc5c1fe2d82ab1963582ddee6a,PodSandboxId:e6b3d3d6cd8c647b0bea15f431ce82a8c7dffb91e3400fbd2453f1f281a5291b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1714420032109751109,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-n967p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 348589ed-de0b-4408-b332-59ce536cf2e4,},Annotations:map[string]string{io.kubernetes.container.hash: 503e6672,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ff65ba0c988e6c2fd413cdf53ae8fd284e0632cf1ca7c5863151eb0fccd8c89,PodSandboxId:6dd93b079cf849b4966c75334dc8dda530710f41cc87a5a301857ff138dab4f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714420025445320366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 2ced97b5-dd4a-4b9c-b006-e7739c446fef,},Annotations:map[string]string{io.kubernetes.container.hash: 805d61f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7799e49eaea19ca69d5fd3c661de9d8fb39e65c62c95dc4d1d0d2948854a639c,PodSandboxId:289e0162085c72d97acb63e70673d1bcdddfda58f65e810e59230e476c57df78,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1714420024846377298,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-twg4q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34
503dbf-f94a-40d3-9972-d14b58487c35,},Annotations:map[string]string{io.kubernetes.container.hash: 5005a2a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7ac94404f935390c9156ada20a72a24310cf9d9ccfce9853e7fee52edd3b02,PodSandboxId:78761038c0a7ba7ab64ab591145bdec412eaac8c36bf0f25ad93b3e01af15bf6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1714420018968070587,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0e6c54d730a5318f94468f58ac405a1,},Anno
tations:map[string]string{io.kubernetes.container.hash: ae3a9d22,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9874c421aadf95b583c7aade534641d825e062d6423ee18a197289b7b458e8,PodSandboxId:086aae5ecaacf33e72bb0ab34a8729f854e0cc326af2eeecf6fa88ca29f18f30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1714420018984021908,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9544572b22422125fe88c312c3459bb0,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8202e8dfca618a649a80d0b8753439184b1295da3b4ef7f12d3ce5542ce2a6ca,PodSandboxId:581c72b7ce651ea07cda7bd14f4957dd0ca8d42e91e440238d6febc0ff6c63b0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1714420018956368538,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adff35bf255b79c5db0a9ab3ca3f80e7,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a917f2f0e5e797e11db5cb057c5051135d6b610008853b8d068a13f693679d5,PodSandboxId:71692e33fcf050cca9a163c38680a659512b0b80090f653cd30236627dd2f8a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1714420018846269350,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-031254,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 652d6cadbe8d440ac8b6e72a067e6d13,},Annotation
s:map[string]string{io.kubernetes.container.hash: 268f455f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca36b995-c148-4eda-ac2e-916936e7c46e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	86410095f767d       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   e6b3d3d6cd8c6       coredns-6d4b75cb6d-n967p
	9ff65ba0c988e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   6dd93b079cf84       storage-provisioner
	7799e49eaea19       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   289e0162085c7       kube-proxy-twg4q
	df9874c421aad       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   086aae5ecaacf       kube-scheduler-test-preload-031254
	5f7ac94404f93       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   78761038c0a7b       etcd-test-preload-031254
	8202e8dfca618       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   581c72b7ce651       kube-controller-manager-test-preload-031254
	1a917f2f0e5e7       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   71692e33fcf05       kube-apiserver-test-preload-031254
	
	
	==> coredns [86410095f767d0fcbc349574041413c16342cddc5c1fe2d82ab1963582ddee6a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:51472 - 3371 "HINFO IN 3456907067295184731.8212442085159974892. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027124489s
	
	
	==> describe nodes <==
	Name:               test-preload-031254
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-031254
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=test-preload-031254
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T19_45_30_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:45:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-031254
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:47:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:47:13 +0000   Mon, 29 Apr 2024 19:45:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:47:13 +0000   Mon, 29 Apr 2024 19:45:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:47:13 +0000   Mon, 29 Apr 2024 19:45:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:47:13 +0000   Mon, 29 Apr 2024 19:47:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.46
	  Hostname:    test-preload-031254
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 49f5a50d1ac94feab4a327f0a45b8d7e
	  System UUID:                49f5a50d-1ac9-4fea-b4a3-27f0a45b8d7e
	  Boot ID:                    d48213c5-e19d-42c9-b6b4-24c4021ec590
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-n967p                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     96s
	  kube-system                 etcd-test-preload-031254                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         109s
	  kube-system                 kube-apiserver-test-preload-031254             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-test-preload-031254    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-twg4q                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-test-preload-031254             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 94s                kube-proxy       
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node test-preload-031254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node test-preload-031254 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node test-preload-031254 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  109s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                99s                kubelet          Node test-preload-031254 status is now: NodeReady
	  Normal  RegisteredNode           97s                node-controller  Node test-preload-031254 event: Registered Node test-preload-031254 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-031254 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-031254 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-031254 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                 node-controller  Node test-preload-031254 event: Registered Node test-preload-031254 in Controller
	
	
	==> dmesg <==
	[Apr29 19:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052355] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043678] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.716148] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.527240] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.667140] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.619798] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.059495] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069247] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.178348] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.146856] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.278820] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[ +13.118539] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.061710] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.635661] systemd-fstab-generator[1072]: Ignoring "noauto" option for root device
	[Apr29 19:47] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.031226] systemd-fstab-generator[1714]: Ignoring "noauto" option for root device
	[  +5.745356] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [5f7ac94404f935390c9156ada20a72a24310cf9d9ccfce9853e7fee52edd3b02] <==
	{"level":"info","ts":"2024-04-29T19:46:59.520Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"c6cb63b0b7b4b88","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-29T19:46:59.538Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T19:46:59.538Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c6cb63b0b7b4b88","initial-advertise-peer-urls":["https://192.168.39.46:2380"],"listen-peer-urls":["https://192.168.39.46:2380"],"advertise-client-urls":["https://192.168.39.46:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.46:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T19:46:59.539Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T19:46:59.539Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-29T19:46:59.539Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.46:2380"}
	{"level":"info","ts":"2024-04-29T19:46:59.539Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.46:2380"}
	{"level":"info","ts":"2024-04-29T19:46:59.539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6cb63b0b7b4b88 switched to configuration voters=(895290790651841416)"}
	{"level":"info","ts":"2024-04-29T19:46:59.539Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"894ce967435a7a53","local-member-id":"c6cb63b0b7b4b88","added-peer-id":"c6cb63b0b7b4b88","added-peer-peer-urls":["https://192.168.39.46:2380"]}
	{"level":"info","ts":"2024-04-29T19:46:59.539Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"894ce967435a7a53","local-member-id":"c6cb63b0b7b4b88","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:46:59.539Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:47:00.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6cb63b0b7b4b88 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T19:47:00.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6cb63b0b7b4b88 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T19:47:00.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6cb63b0b7b4b88 received MsgPreVoteResp from c6cb63b0b7b4b88 at term 2"}
	{"level":"info","ts":"2024-04-29T19:47:00.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6cb63b0b7b4b88 became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T19:47:00.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6cb63b0b7b4b88 received MsgVoteResp from c6cb63b0b7b4b88 at term 3"}
	{"level":"info","ts":"2024-04-29T19:47:00.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6cb63b0b7b4b88 became leader at term 3"}
	{"level":"info","ts":"2024-04-29T19:47:00.754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c6cb63b0b7b4b88 elected leader c6cb63b0b7b4b88 at term 3"}
	{"level":"info","ts":"2024-04-29T19:47:00.756Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"c6cb63b0b7b4b88","local-member-attributes":"{Name:test-preload-031254 ClientURLs:[https://192.168.39.46:2379]}","request-path":"/0/members/c6cb63b0b7b4b88/attributes","cluster-id":"894ce967435a7a53","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T19:47:00.756Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:47:00.757Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:47:00.758Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T19:47:00.759Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.46:2379"}
	{"level":"info","ts":"2024-04-29T19:47:00.759Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T19:47:00.759Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:47:19 up 0 min,  0 users,  load average: 1.32, 0.33, 0.11
	Linux test-preload-031254 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1a917f2f0e5e797e11db5cb057c5051135d6b610008853b8d068a13f693679d5] <==
	I0429 19:47:03.204713       1 establishing_controller.go:76] Starting EstablishingController
	I0429 19:47:03.204877       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0429 19:47:03.204925       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0429 19:47:03.207302       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0429 19:47:03.235419       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0429 19:47:03.265153       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0429 19:47:03.330950       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0429 19:47:03.335224       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 19:47:03.340775       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0429 19:47:03.366587       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 19:47:03.369580       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0429 19:47:03.371272       1 cache.go:39] Caches are synced for autoregister controller
	I0429 19:47:03.379695       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0429 19:47:03.402948       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0429 19:47:03.408174       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 19:47:03.836626       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0429 19:47:04.171392       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 19:47:05.014883       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0429 19:47:05.029945       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0429 19:47:05.065693       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0429 19:47:05.096085       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 19:47:05.102781       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 19:47:05.279341       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0429 19:47:15.962674       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 19:47:15.988957       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8202e8dfca618a649a80d0b8753439184b1295da3b4ef7f12d3ce5542ce2a6ca] <==
	I0429 19:47:15.774552       1 shared_informer.go:262] Caches are synced for PV protection
	I0429 19:47:15.776085       1 shared_informer.go:262] Caches are synced for stateful set
	I0429 19:47:15.776158       1 shared_informer.go:262] Caches are synced for cronjob
	I0429 19:47:15.779913       1 shared_informer.go:262] Caches are synced for PVC protection
	I0429 19:47:15.782619       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0429 19:47:15.782676       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0429 19:47:15.782693       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0429 19:47:15.782711       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0429 19:47:15.794792       1 shared_informer.go:262] Caches are synced for ephemeral
	I0429 19:47:15.798557       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0429 19:47:15.812063       1 shared_informer.go:262] Caches are synced for HPA
	I0429 19:47:15.934705       1 shared_informer.go:262] Caches are synced for taint
	I0429 19:47:15.934983       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0429 19:47:15.935310       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0429 19:47:15.935700       1 event.go:294] "Event occurred" object="test-preload-031254" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-031254 event: Registered Node test-preload-031254 in Controller"
	W0429 19:47:15.936448       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-031254. Assuming now as a timestamp.
	I0429 19:47:15.936668       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0429 19:47:15.948414       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0429 19:47:15.978977       1 shared_informer.go:262] Caches are synced for endpoint
	I0429 19:47:15.982335       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0429 19:47:15.991730       1 shared_informer.go:262] Caches are synced for resource quota
	I0429 19:47:16.004702       1 shared_informer.go:262] Caches are synced for resource quota
	I0429 19:47:16.430785       1 shared_informer.go:262] Caches are synced for garbage collector
	I0429 19:47:16.466925       1 shared_informer.go:262] Caches are synced for garbage collector
	I0429 19:47:16.466970       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [7799e49eaea19ca69d5fd3c661de9d8fb39e65c62c95dc4d1d0d2948854a639c] <==
	I0429 19:47:05.214244       1 node.go:163] Successfully retrieved node IP: 192.168.39.46
	I0429 19:47:05.214365       1 server_others.go:138] "Detected node IP" address="192.168.39.46"
	I0429 19:47:05.214436       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0429 19:47:05.265638       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0429 19:47:05.265676       1 server_others.go:206] "Using iptables Proxier"
	I0429 19:47:05.266269       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0429 19:47:05.267392       1 server.go:661] "Version info" version="v1.24.4"
	I0429 19:47:05.267430       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:47:05.268569       1 config.go:226] "Starting endpoint slice config controller"
	I0429 19:47:05.268754       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0429 19:47:05.268907       1 config.go:317] "Starting service config controller"
	I0429 19:47:05.268965       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0429 19:47:05.270153       1 config.go:444] "Starting node config controller"
	I0429 19:47:05.271046       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0429 19:47:05.376644       1 shared_informer.go:262] Caches are synced for node config
	I0429 19:47:05.377322       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0429 19:47:05.377419       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [df9874c421aadf95b583c7aade534641d825e062d6423ee18a197289b7b458e8] <==
	I0429 19:46:59.965404       1 serving.go:348] Generated self-signed cert in-memory
	W0429 19:47:03.249060       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 19:47:03.249269       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 19:47:03.249284       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 19:47:03.249292       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 19:47:03.332792       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0429 19:47:03.332873       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:47:03.335654       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0429 19:47:03.335991       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 19:47:03.336071       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 19:47:03.336257       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 19:47:03.436532       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 19:47:03 test-preload-031254 kubelet[1079]: I0429 19:47:03.360810    1079 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-031254"
	Apr 29 19:47:03 test-preload-031254 kubelet[1079]: I0429 19:47:03.366209    1079 setters.go:532] "Node became not ready" node="test-preload-031254" condition={Type:Ready Status:False LastHeartbeatTime:2024-04-29 19:47:03.366155932 +0000 UTC m=+5.376166793 LastTransitionTime:2024-04-29 19:47:03.366155932 +0000 UTC m=+5.376166793 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: I0429 19:47:04.108037    1079 apiserver.go:52] "Watching apiserver"
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: I0429 19:47:04.112971    1079 topology_manager.go:200] "Topology Admit Handler"
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: I0429 19:47:04.113048    1079 topology_manager.go:200] "Topology Admit Handler"
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: I0429 19:47:04.113082    1079 topology_manager.go:200] "Topology Admit Handler"
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: E0429 19:47:04.117072    1079 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-n967p" podUID=348589ed-de0b-4408-b332-59ce536cf2e4
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: I0429 19:47:04.178901    1079 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34503dbf-f94a-40d3-9972-d14b58487c35-lib-modules\") pod \"kube-proxy-twg4q\" (UID: \"34503dbf-f94a-40d3-9972-d14b58487c35\") " pod="kube-system/kube-proxy-twg4q"
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: I0429 19:47:04.178971    1079 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d25jq\" (UniqueName: \"kubernetes.io/projected/34503dbf-f94a-40d3-9972-d14b58487c35-kube-api-access-d25jq\") pod \"kube-proxy-twg4q\" (UID: \"34503dbf-f94a-40d3-9972-d14b58487c35\") " pod="kube-system/kube-proxy-twg4q"
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: I0429 19:47:04.179000    1079 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/348589ed-de0b-4408-b332-59ce536cf2e4-config-volume\") pod \"coredns-6d4b75cb6d-n967p\" (UID: \"348589ed-de0b-4408-b332-59ce536cf2e4\") " pod="kube-system/coredns-6d4b75cb6d-n967p"
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: I0429 19:47:04.179020    1079 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2ced97b5-dd4a-4b9c-b006-e7739c446fef-tmp\") pod \"storage-provisioner\" (UID: \"2ced97b5-dd4a-4b9c-b006-e7739c446fef\") " pod="kube-system/storage-provisioner"
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: I0429 19:47:04.179037    1079 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/34503dbf-f94a-40d3-9972-d14b58487c35-kube-proxy\") pod \"kube-proxy-twg4q\" (UID: \"34503dbf-f94a-40d3-9972-d14b58487c35\") " pod="kube-system/kube-proxy-twg4q"
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: I0429 19:47:04.179057    1079 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34503dbf-f94a-40d3-9972-d14b58487c35-xtables-lock\") pod \"kube-proxy-twg4q\" (UID: \"34503dbf-f94a-40d3-9972-d14b58487c35\") " pod="kube-system/kube-proxy-twg4q"
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: I0429 19:47:04.179076    1079 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7r9q\" (UniqueName: \"kubernetes.io/projected/348589ed-de0b-4408-b332-59ce536cf2e4-kube-api-access-c7r9q\") pod \"coredns-6d4b75cb6d-n967p\" (UID: \"348589ed-de0b-4408-b332-59ce536cf2e4\") " pod="kube-system/coredns-6d4b75cb6d-n967p"
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: I0429 19:47:04.179099    1079 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klskl\" (UniqueName: \"kubernetes.io/projected/2ced97b5-dd4a-4b9c-b006-e7739c446fef-kube-api-access-klskl\") pod \"storage-provisioner\" (UID: \"2ced97b5-dd4a-4b9c-b006-e7739c446fef\") " pod="kube-system/storage-provisioner"
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: I0429 19:47:04.179126    1079 reconciler.go:159] "Reconciler: start to sync state"
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: E0429 19:47:04.283792    1079 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: E0429 19:47:04.284274    1079 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/348589ed-de0b-4408-b332-59ce536cf2e4-config-volume podName:348589ed-de0b-4408-b332-59ce536cf2e4 nodeName:}" failed. No retries permitted until 2024-04-29 19:47:04.784239729 +0000 UTC m=+6.794250589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/348589ed-de0b-4408-b332-59ce536cf2e4-config-volume") pod "coredns-6d4b75cb6d-n967p" (UID: "348589ed-de0b-4408-b332-59ce536cf2e4") : object "kube-system"/"coredns" not registered
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: E0429 19:47:04.786697    1079 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 29 19:47:04 test-preload-031254 kubelet[1079]: E0429 19:47:04.786759    1079 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/348589ed-de0b-4408-b332-59ce536cf2e4-config-volume podName:348589ed-de0b-4408-b332-59ce536cf2e4 nodeName:}" failed. No retries permitted until 2024-04-29 19:47:05.786744037 +0000 UTC m=+7.796754897 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/348589ed-de0b-4408-b332-59ce536cf2e4-config-volume") pod "coredns-6d4b75cb6d-n967p" (UID: "348589ed-de0b-4408-b332-59ce536cf2e4") : object "kube-system"/"coredns" not registered
	Apr 29 19:47:05 test-preload-031254 kubelet[1079]: E0429 19:47:05.797364    1079 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 29 19:47:05 test-preload-031254 kubelet[1079]: E0429 19:47:05.797528    1079 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/348589ed-de0b-4408-b332-59ce536cf2e4-config-volume podName:348589ed-de0b-4408-b332-59ce536cf2e4 nodeName:}" failed. No retries permitted until 2024-04-29 19:47:07.797450675 +0000 UTC m=+9.807461535 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/348589ed-de0b-4408-b332-59ce536cf2e4-config-volume") pod "coredns-6d4b75cb6d-n967p" (UID: "348589ed-de0b-4408-b332-59ce536cf2e4") : object "kube-system"/"coredns" not registered
	Apr 29 19:47:06 test-preload-031254 kubelet[1079]: E0429 19:47:06.258634    1079 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-n967p" podUID=348589ed-de0b-4408-b332-59ce536cf2e4
	Apr 29 19:47:07 test-preload-031254 kubelet[1079]: E0429 19:47:07.818691    1079 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 29 19:47:07 test-preload-031254 kubelet[1079]: E0429 19:47:07.818885    1079 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/348589ed-de0b-4408-b332-59ce536cf2e4-config-volume podName:348589ed-de0b-4408-b332-59ce536cf2e4 nodeName:}" failed. No retries permitted until 2024-04-29 19:47:11.818865774 +0000 UTC m=+13.828876649 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/348589ed-de0b-4408-b332-59ce536cf2e4-config-volume") pod "coredns-6d4b75cb6d-n967p" (UID: "348589ed-de0b-4408-b332-59ce536cf2e4") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [9ff65ba0c988e6c2fd413cdf53ae8fd284e0632cf1ca7c5863151eb0fccd8c89] <==
	I0429 19:47:05.594657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
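The kubelet log above retries the failed config-volume mount with a doubling delay (durationBeforeRetry 500ms, then 1s, 2s, 4s). Below is a minimal Go sketch of that capped-doubling retry pattern; the function name, attempt count, and cap are illustrative and are not taken from kubelet or minikube source.

	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// retryWithBackoff retries op, doubling the wait between failed attempts
	// (500ms, 1s, 2s, 4s, ...) up to a cap, mirroring the durationBeforeRetry
	// progression visible in the kubelet log above.
	func retryWithBackoff(op func() error, attempts int, initial, max time.Duration) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed: %v; retrying in %s\n", i+1, err, delay)
			time.Sleep(delay)
			delay *= 2
			if delay > max {
				delay = max
			}
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}
	
	func main() {
		// Simulated operation that keeps failing, like the config-volume mount
		// while the coredns ConfigMap is not yet registered with the kubelet.
		mount := func() error { return errors.New(`object "kube-system"/"coredns" not registered`) }
		_ = retryWithBackoff(mount, 4, 500*time.Millisecond, 4*time.Second)
	}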
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-031254 -n test-preload-031254
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-031254 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-031254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-031254
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-031254: (1.161629164s)
--- FAIL: TestPreload (267.66s)
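The NetworkPluginNotReady errors in the kubelet log above mean no CNI configuration was found under /etc/cni/net.d/. As an illustration only, the Go sketch below writes a minimal bridge conflist into that directory; the file name, network name, bridge name, and subnet are assumptions for the example and are not the configuration minikube or CRI-O actually installs.

	package main
	
	import (
		"log"
		"os"
		"path/filepath"
	)
	
	// An illustrative CNI bridge configuration. Field names follow the upstream
	// bridge/host-local plugins; the values here are placeholders.
	const bridgeConf = `{
	  "cniVersion": "0.3.1",
	  "name": "example-bridge",
	  "type": "bridge",
	  "bridge": "cni0",
	  "isGateway": true,
	  "ipMasq": true,
	  "ipam": {
	    "type": "host-local",
	    "subnet": "10.85.0.0/16",
	    "routes": [{ "dst": "0.0.0.0/0" }]
	  }
	}
	`
	
	func main() {
		dir := "/etc/cni/net.d"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			log.Fatal(err)
		}
		// CNI config files in this directory are loaded in lexical order.
		path := filepath.Join(dir, "10-example-bridge.conf")
		if err := os.WriteFile(path, []byte(bridgeConf), 0o644); err != nil {
			log.Fatal(err)
		}
		log.Printf("wrote %s", path)
	}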

                                                
                                    
x
+
TestKubernetesUpgrade (403.26s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-935578 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-935578 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m32.111220873s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-935578] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-935578" primary control-plane node in "kubernetes-upgrade-935578" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:49:16.675082   55320 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:49:16.675199   55320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:49:16.675207   55320 out.go:304] Setting ErrFile to fd 2...
	I0429 19:49:16.675211   55320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:49:16.675421   55320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:49:16.676381   55320 out.go:298] Setting JSON to false
	I0429 19:49:16.677102   55320 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5455,"bootTime":1714414702,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 19:49:16.677168   55320 start.go:139] virtualization: kvm guest
	I0429 19:49:16.678881   55320 out.go:177] * [kubernetes-upgrade-935578] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 19:49:16.681138   55320 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:49:16.680341   55320 notify.go:220] Checking for updates...
	I0429 19:49:16.684179   55320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:49:16.686271   55320 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:49:16.688998   55320 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:49:16.691007   55320 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 19:49:16.693204   55320 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:49:16.694648   55320 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:49:16.736977   55320 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 19:49:16.738672   55320 start.go:297] selected driver: kvm2
	I0429 19:49:16.738691   55320 start.go:901] validating driver "kvm2" against <nil>
	I0429 19:49:16.738705   55320 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:49:16.739756   55320 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:49:16.739863   55320 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 19:49:16.756301   55320 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 19:49:16.756368   55320 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 19:49:16.756609   55320 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 19:49:16.756670   55320 cni.go:84] Creating CNI manager for ""
	I0429 19:49:16.756687   55320 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:49:16.756704   55320 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 19:49:16.756772   55320 start.go:340] cluster config:
	{Name:kubernetes-upgrade-935578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-935578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:49:16.756889   55320 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:49:16.758641   55320 out.go:177] * Starting "kubernetes-upgrade-935578" primary control-plane node in "kubernetes-upgrade-935578" cluster
	I0429 19:49:16.759754   55320 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 19:49:16.759790   55320 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0429 19:49:16.759803   55320 cache.go:56] Caching tarball of preloaded images
	I0429 19:49:16.759895   55320 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 19:49:16.759909   55320 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0429 19:49:16.760330   55320 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/config.json ...
	I0429 19:49:16.760362   55320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/config.json: {Name:mk95013e89952dcc76f89f060af06c15f2f293ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:49:16.760522   55320 start.go:360] acquireMachinesLock for kubernetes-upgrade-935578: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:49:16.760565   55320 start.go:364] duration metric: took 22.535µs to acquireMachinesLock for "kubernetes-upgrade-935578"
	I0429 19:49:16.760588   55320 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-935578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-935578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:49:16.760669   55320 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 19:49:16.762229   55320 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 19:49:16.762386   55320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:49:16.762432   55320 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:49:16.778839   55320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39205
	I0429 19:49:16.779381   55320 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:49:16.780066   55320 main.go:141] libmachine: Using API Version  1
	I0429 19:49:16.780092   55320 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:49:16.780537   55320 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:49:16.780741   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetMachineName
	I0429 19:49:16.780926   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:49:16.781100   55320 start.go:159] libmachine.API.Create for "kubernetes-upgrade-935578" (driver="kvm2")
	I0429 19:49:16.781129   55320 client.go:168] LocalClient.Create starting
	I0429 19:49:16.781158   55320 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 19:49:16.781192   55320 main.go:141] libmachine: Decoding PEM data...
	I0429 19:49:16.781210   55320 main.go:141] libmachine: Parsing certificate...
	I0429 19:49:16.781270   55320 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 19:49:16.781300   55320 main.go:141] libmachine: Decoding PEM data...
	I0429 19:49:16.781317   55320 main.go:141] libmachine: Parsing certificate...
	I0429 19:49:16.781342   55320 main.go:141] libmachine: Running pre-create checks...
	I0429 19:49:16.781354   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .PreCreateCheck
	I0429 19:49:16.781711   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetConfigRaw
	I0429 19:49:16.782167   55320 main.go:141] libmachine: Creating machine...
	I0429 19:49:16.782200   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .Create
	I0429 19:49:16.782332   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Creating KVM machine...
	I0429 19:49:16.783723   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found existing default KVM network
	I0429 19:49:16.784419   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:16.784265   55391 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1c0}
	I0429 19:49:16.784451   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | created network xml: 
	I0429 19:49:16.784467   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | <network>
	I0429 19:49:16.784476   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG |   <name>mk-kubernetes-upgrade-935578</name>
	I0429 19:49:16.784493   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG |   <dns enable='no'/>
	I0429 19:49:16.784504   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG |   
	I0429 19:49:16.784521   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0429 19:49:16.784537   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG |     <dhcp>
	I0429 19:49:16.784552   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0429 19:49:16.784563   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG |     </dhcp>
	I0429 19:49:16.784581   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG |   </ip>
	I0429 19:49:16.784593   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG |   
	I0429 19:49:16.784606   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | </network>
	I0429 19:49:16.784615   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | 
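The driver has just printed the network XML it is about to create. A rough stand-alone equivalent that defines and starts such a network by shelling out to the virsh CLI is sketched below; the network name, addresses, and temp-file handling are placeholders, and this is only an illustration of the mechanism, not how docker-machine-driver-kvm2 itself creates the network.

	package main
	
	import (
		"log"
		"os"
		"os/exec"
	)
	
	// Illustrative only: define and start a libvirt network from an XML
	// definition shaped like the one printed in the log above.
	const networkXML = `<network>
	  <name>mk-example</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	`
	
	func run(args ...string) {
		cmd := exec.Command("virsh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("virsh %v: %v", args, err)
		}
	}
	
	func main() {
		f, err := os.CreateTemp("", "net-*.xml")
		if err != nil {
			log.Fatal(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(networkXML); err != nil {
			log.Fatal(err)
		}
		f.Close()
	
		run("net-define", f.Name()) // make the network persistent
		run("net-start", "mk-example")
		run("net-autostart", "mk-example")
	}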
	I0429 19:49:16.790192   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | trying to create private KVM network mk-kubernetes-upgrade-935578 192.168.39.0/24...
	I0429 19:49:16.869760   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578 ...
	I0429 19:49:16.869801   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 19:49:16.869816   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | private KVM network mk-kubernetes-upgrade-935578 192.168.39.0/24 created
	I0429 19:49:16.869837   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:16.869694   55391 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:49:16.869859   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 19:49:17.093523   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:17.093388   55391 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578/id_rsa...
	I0429 19:49:17.201769   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:17.201619   55391 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578/kubernetes-upgrade-935578.rawdisk...
	I0429 19:49:17.201800   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Writing magic tar header
	I0429 19:49:17.201812   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Writing SSH key tar header
	I0429 19:49:17.201821   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:17.201733   55391 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578 ...
	I0429 19:49:17.201832   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578
	I0429 19:49:17.201925   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578 (perms=drwx------)
	I0429 19:49:17.201954   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 19:49:17.201973   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 19:49:17.201986   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:49:17.202013   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 19:49:17.202029   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 19:49:17.202046   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Checking permissions on dir: /home/jenkins
	I0429 19:49:17.202090   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Checking permissions on dir: /home
	I0429 19:49:17.202112   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 19:49:17.202128   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 19:49:17.202143   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 19:49:17.202157   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 19:49:17.202172   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Skipping /home - not owner
	I0429 19:49:17.202185   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Creating domain...
	I0429 19:49:17.203214   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) define libvirt domain using xml: 
	I0429 19:49:17.203255   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) <domain type='kvm'>
	I0429 19:49:17.203267   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)   <name>kubernetes-upgrade-935578</name>
	I0429 19:49:17.203278   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)   <memory unit='MiB'>2200</memory>
	I0429 19:49:17.203287   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)   <vcpu>2</vcpu>
	I0429 19:49:17.203298   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)   <features>
	I0429 19:49:17.203306   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <acpi/>
	I0429 19:49:17.203313   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <apic/>
	I0429 19:49:17.203321   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <pae/>
	I0429 19:49:17.203330   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     
	I0429 19:49:17.203338   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)   </features>
	I0429 19:49:17.203342   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)   <cpu mode='host-passthrough'>
	I0429 19:49:17.203351   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)   
	I0429 19:49:17.203355   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)   </cpu>
	I0429 19:49:17.203363   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)   <os>
	I0429 19:49:17.203368   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <type>hvm</type>
	I0429 19:49:17.203378   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <boot dev='cdrom'/>
	I0429 19:49:17.203387   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <boot dev='hd'/>
	I0429 19:49:17.203399   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <bootmenu enable='no'/>
	I0429 19:49:17.203414   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)   </os>
	I0429 19:49:17.203427   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)   <devices>
	I0429 19:49:17.203438   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <disk type='file' device='cdrom'>
	I0429 19:49:17.203455   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578/boot2docker.iso'/>
	I0429 19:49:17.203466   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)       <target dev='hdc' bus='scsi'/>
	I0429 19:49:17.203475   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)       <readonly/>
	I0429 19:49:17.203490   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     </disk>
	I0429 19:49:17.203504   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <disk type='file' device='disk'>
	I0429 19:49:17.203517   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 19:49:17.203534   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578/kubernetes-upgrade-935578.rawdisk'/>
	I0429 19:49:17.203548   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)       <target dev='hda' bus='virtio'/>
	I0429 19:49:17.203581   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     </disk>
	I0429 19:49:17.203594   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <interface type='network'>
	I0429 19:49:17.203601   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)       <source network='mk-kubernetes-upgrade-935578'/>
	I0429 19:49:17.203609   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)       <model type='virtio'/>
	I0429 19:49:17.203640   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     </interface>
	I0429 19:49:17.203669   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <interface type='network'>
	I0429 19:49:17.203685   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)       <source network='default'/>
	I0429 19:49:17.203699   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)       <model type='virtio'/>
	I0429 19:49:17.203711   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     </interface>
	I0429 19:49:17.203725   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <serial type='pty'>
	I0429 19:49:17.203737   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)       <target port='0'/>
	I0429 19:49:17.203755   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     </serial>
	I0429 19:49:17.203768   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <console type='pty'>
	I0429 19:49:17.203779   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)       <target type='serial' port='0'/>
	I0429 19:49:17.203793   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     </console>
	I0429 19:49:17.203807   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     <rng model='virtio'>
	I0429 19:49:17.203823   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)       <backend model='random'>/dev/random</backend>
	I0429 19:49:17.203841   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     </rng>
	I0429 19:49:17.203855   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     
	I0429 19:49:17.203867   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)     
	I0429 19:49:17.203878   55320 main.go:141] libmachine: (kubernetes-upgrade-935578)   </devices>
	I0429 19:49:17.203891   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) </domain>
	I0429 19:49:17.203912   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) 
	I0429 19:49:17.208181   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:61:f3:3f in network default
	I0429 19:49:17.208762   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Ensuring networks are active...
	I0429 19:49:17.208789   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:17.209480   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Ensuring network default is active
	I0429 19:49:17.209735   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Ensuring network mk-kubernetes-upgrade-935578 is active
	I0429 19:49:17.210350   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Getting domain xml...
	I0429 19:49:17.211122   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Creating domain...
	I0429 19:49:18.403935   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Waiting to get IP...
	I0429 19:49:18.404945   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:18.405421   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:18.405468   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:18.405416   55391 retry.go:31] will retry after 215.071445ms: waiting for machine to come up
	I0429 19:49:18.621948   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:18.622440   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:18.622472   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:18.622409   55391 retry.go:31] will retry after 242.774158ms: waiting for machine to come up
	I0429 19:49:18.866894   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:18.867311   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:18.867351   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:18.867258   55391 retry.go:31] will retry after 427.707073ms: waiting for machine to come up
	I0429 19:49:19.296902   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:19.297330   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:19.297362   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:19.297276   55391 retry.go:31] will retry after 487.780709ms: waiting for machine to come up
	I0429 19:49:19.787066   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:19.787539   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:19.787568   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:19.787497   55391 retry.go:31] will retry after 494.901913ms: waiting for machine to come up
	I0429 19:49:20.284286   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:20.284757   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:20.284781   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:20.284704   55391 retry.go:31] will retry after 603.350077ms: waiting for machine to come up
	I0429 19:49:20.889347   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:20.889806   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:20.889854   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:20.889759   55391 retry.go:31] will retry after 1.09977196s: waiting for machine to come up
	I0429 19:49:21.991551   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:21.991935   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:21.991965   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:21.991887   55391 retry.go:31] will retry after 1.138603871s: waiting for machine to come up
	I0429 19:49:23.132141   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:23.132616   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:23.132640   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:23.132586   55391 retry.go:31] will retry after 1.692008788s: waiting for machine to come up
	I0429 19:49:24.827768   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:24.828163   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:24.828189   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:24.828116   55391 retry.go:31] will retry after 1.896680394s: waiting for machine to come up
	I0429 19:49:26.727022   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:26.727486   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:26.727517   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:26.727431   55391 retry.go:31] will retry after 1.806641425s: waiting for machine to come up
	I0429 19:49:28.535432   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:28.535788   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:28.535811   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:28.535756   55391 retry.go:31] will retry after 2.506815376s: waiting for machine to come up
	I0429 19:49:31.045513   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:31.045986   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:31.046019   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:31.045938   55391 retry.go:31] will retry after 4.116531929s: waiting for machine to come up
	I0429 19:49:35.166486   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:35.166891   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find current IP address of domain kubernetes-upgrade-935578 in network mk-kubernetes-upgrade-935578
	I0429 19:49:35.166915   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | I0429 19:49:35.166869   55391 retry.go:31] will retry after 4.977330337s: waiting for machine to come up
	I0429 19:49:40.147719   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.148154   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Found IP for machine: 192.168.39.125
	I0429 19:49:40.148177   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Reserving static IP address...
	I0429 19:49:40.148187   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has current primary IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.148569   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-935578", mac: "52:54:00:8a:1f:ba", ip: "192.168.39.125"} in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.219518   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Reserved static IP address: 192.168.39.125
	I0429 19:49:40.219548   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Waiting for SSH to be available...
	I0429 19:49:40.219558   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Getting to WaitForSSH function...
	I0429 19:49:40.221974   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.222428   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:40.222451   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.222650   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Using SSH client type: external
	I0429 19:49:40.222676   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578/id_rsa (-rw-------)
	I0429 19:49:40.222722   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:49:40.222738   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | About to run SSH command:
	I0429 19:49:40.222755   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | exit 0
	I0429 19:49:40.346130   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | SSH cmd err, output: <nil>: 
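WaitForSSH above polls the new VM by running exit 0 through an external ssh client with host-key checking disabled. The Go sketch below shows that style of readiness probe; the user, address, key path, and retry interval are placeholders modelled on the log, not values to reuse.

	package main
	
	import (
		"log"
		"os/exec"
		"time"
	)
	
	// sshReady returns nil once "exit 0" succeeds over ssh. The options match
	// the external-client invocation shown in the log above (no host-key
	// checking, key-only auth).
	func sshReady(user, addr, keyPath string) error {
		args := []string{
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "ConnectTimeout=10",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			user + "@" + addr,
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}
	
	func main() {
		// Placeholder values; in the log these come from the machine config.
		for i := 0; i < 30; i++ {
			if err := sshReady("docker", "192.168.39.125", "/path/to/id_rsa"); err == nil {
				log.Println("ssh is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("ssh did not become available")
	}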
	I0429 19:49:40.346418   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) KVM machine creation complete!
	I0429 19:49:40.346753   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetConfigRaw
	I0429 19:49:40.347250   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:49:40.347443   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:49:40.347572   55320 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 19:49:40.347583   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetState
	I0429 19:49:40.348771   55320 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 19:49:40.348785   55320 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 19:49:40.348791   55320 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 19:49:40.348797   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHHostname
	I0429 19:49:40.350971   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.351336   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:40.351367   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.351465   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHPort
	I0429 19:49:40.351617   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:40.351756   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:40.351880   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHUsername
	I0429 19:49:40.352033   55320 main.go:141] libmachine: Using SSH client type: native
	I0429 19:49:40.352239   55320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0429 19:49:40.352251   55320 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 19:49:40.453414   55320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:49:40.453446   55320 main.go:141] libmachine: Detecting the provisioner...
	I0429 19:49:40.453454   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHHostname
	I0429 19:49:40.456097   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.456449   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:40.456473   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.456645   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHPort
	I0429 19:49:40.456829   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:40.457018   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:40.457141   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHUsername
	I0429 19:49:40.457317   55320 main.go:141] libmachine: Using SSH client type: native
	I0429 19:49:40.457515   55320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0429 19:49:40.457527   55320 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 19:49:40.559206   55320 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 19:49:40.559278   55320 main.go:141] libmachine: found compatible host: buildroot
	I0429 19:49:40.559291   55320 main.go:141] libmachine: Provisioning with buildroot...
	I0429 19:49:40.559304   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetMachineName
	I0429 19:49:40.559557   55320 buildroot.go:166] provisioning hostname "kubernetes-upgrade-935578"
	I0429 19:49:40.559581   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetMachineName
	I0429 19:49:40.559758   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHHostname
	I0429 19:49:40.562240   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.562621   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:40.562651   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.562806   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHPort
	I0429 19:49:40.562970   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:40.563134   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:40.563254   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHUsername
	I0429 19:49:40.563395   55320 main.go:141] libmachine: Using SSH client type: native
	I0429 19:49:40.563565   55320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0429 19:49:40.563581   55320 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-935578 && echo "kubernetes-upgrade-935578" | sudo tee /etc/hostname
	I0429 19:49:40.681919   55320 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-935578
	
	I0429 19:49:40.681955   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHHostname
	I0429 19:49:40.684674   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.685019   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:40.685052   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.685179   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHPort
	I0429 19:49:40.685395   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:40.685561   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:40.685714   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHUsername
	I0429 19:49:40.685849   55320 main.go:141] libmachine: Using SSH client type: native
	I0429 19:49:40.686000   55320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0429 19:49:40.686017   55320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-935578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-935578/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-935578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:49:40.796932   55320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:49:40.796960   55320 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:49:40.796991   55320 buildroot.go:174] setting up certificates
	I0429 19:49:40.797003   55320 provision.go:84] configureAuth start
	I0429 19:49:40.797015   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetMachineName
	I0429 19:49:40.797270   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetIP
	I0429 19:49:40.799961   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.800308   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:40.800336   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.800437   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHHostname
	I0429 19:49:40.802751   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.803055   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:40.803087   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.803210   55320 provision.go:143] copyHostCerts
	I0429 19:49:40.803271   55320 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:49:40.803283   55320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:49:40.803356   55320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:49:40.803455   55320 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:49:40.803465   55320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:49:40.803501   55320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:49:40.803607   55320 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:49:40.803618   55320 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:49:40.803653   55320 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:49:40.803729   55320 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-935578 san=[127.0.0.1 192.168.39.125 kubernetes-upgrade-935578 localhost minikube]
	I0429 19:49:40.977190   55320 provision.go:177] copyRemoteCerts
	I0429 19:49:40.977257   55320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:49:40.977294   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHHostname
	I0429 19:49:40.979615   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.979920   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:40.979951   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:40.980092   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHPort
	I0429 19:49:40.980275   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:40.980473   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHUsername
	I0429 19:49:40.980616   55320 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578/id_rsa Username:docker}
	I0429 19:49:41.060324   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 19:49:41.086880   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:49:41.113041   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0429 19:49:41.140064   55320 provision.go:87] duration metric: took 343.051919ms to configureAuth
	I0429 19:49:41.140086   55320 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:49:41.140236   55320 config.go:182] Loaded profile config "kubernetes-upgrade-935578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 19:49:41.140324   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHHostname
	I0429 19:49:41.143083   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.143457   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:41.143476   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.143636   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHPort
	I0429 19:49:41.143788   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:41.143966   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:41.144094   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHUsername
	I0429 19:49:41.144242   55320 main.go:141] libmachine: Using SSH client type: native
	I0429 19:49:41.144431   55320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0429 19:49:41.144451   55320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:49:41.414956   55320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 19:49:41.414994   55320 main.go:141] libmachine: Checking connection to Docker...
	I0429 19:49:41.415003   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetURL
	I0429 19:49:41.416173   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Using libvirt version 6000000
	I0429 19:49:41.417961   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.418323   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:41.418351   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.418527   55320 main.go:141] libmachine: Docker is up and running!
	I0429 19:49:41.418548   55320 main.go:141] libmachine: Reticulating splines...
	I0429 19:49:41.418554   55320 client.go:171] duration metric: took 24.63741898s to LocalClient.Create
	I0429 19:49:41.418574   55320 start.go:167] duration metric: took 24.63747731s to libmachine.API.Create "kubernetes-upgrade-935578"
	I0429 19:49:41.418583   55320 start.go:293] postStartSetup for "kubernetes-upgrade-935578" (driver="kvm2")
	I0429 19:49:41.418592   55320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:49:41.418618   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:49:41.418859   55320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:49:41.418882   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHHostname
	I0429 19:49:41.420832   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.421085   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:41.421120   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.421256   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHPort
	I0429 19:49:41.421417   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:41.421571   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHUsername
	I0429 19:49:41.421708   55320 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578/id_rsa Username:docker}
	I0429 19:49:41.505482   55320 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:49:41.510188   55320 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:49:41.510207   55320 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:49:41.510274   55320 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:49:41.510371   55320 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:49:41.510481   55320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:49:41.521228   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:49:41.546705   55320 start.go:296] duration metric: took 128.111386ms for postStartSetup
	I0429 19:49:41.546745   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetConfigRaw
	I0429 19:49:41.547265   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetIP
	I0429 19:49:41.550000   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.550392   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:41.550419   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.550663   55320 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/config.json ...
	I0429 19:49:41.550848   55320 start.go:128] duration metric: took 24.790170022s to createHost
	I0429 19:49:41.550873   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHHostname
	I0429 19:49:41.553016   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.553340   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:41.553374   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.553530   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHPort
	I0429 19:49:41.553713   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:41.553900   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:41.554020   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHUsername
	I0429 19:49:41.554180   55320 main.go:141] libmachine: Using SSH client type: native
	I0429 19:49:41.554374   55320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0429 19:49:41.554393   55320 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 19:49:41.655224   55320 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714420181.637255096
	
	I0429 19:49:41.655249   55320 fix.go:216] guest clock: 1714420181.637255096
	I0429 19:49:41.655256   55320 fix.go:229] Guest: 2024-04-29 19:49:41.637255096 +0000 UTC Remote: 2024-04-29 19:49:41.550859484 +0000 UTC m=+24.945634123 (delta=86.395612ms)
	I0429 19:49:41.655274   55320 fix.go:200] guest clock delta is within tolerance: 86.395612ms
	I0429 19:49:41.655279   55320 start.go:83] releasing machines lock for "kubernetes-upgrade-935578", held for 24.894703129s
	I0429 19:49:41.655311   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:49:41.655598   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetIP
	I0429 19:49:41.658215   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.658577   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:41.658606   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.658774   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:49:41.659229   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:49:41.659393   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:49:41.659480   55320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:49:41.659516   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHHostname
	I0429 19:49:41.659616   55320 ssh_runner.go:195] Run: cat /version.json
	I0429 19:49:41.659635   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHHostname
	I0429 19:49:41.662026   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.662211   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.662406   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:41.662432   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.662553   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHPort
	I0429 19:49:41.662655   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:41.662687   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:41.662728   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:41.662891   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHUsername
	I0429 19:49:41.662967   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHPort
	I0429 19:49:41.663077   55320 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578/id_rsa Username:docker}
	I0429 19:49:41.663131   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:49:41.663257   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHUsername
	I0429 19:49:41.663382   55320 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578/id_rsa Username:docker}
	I0429 19:49:41.768265   55320 ssh_runner.go:195] Run: systemctl --version
	I0429 19:49:41.775287   55320 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:49:41.952397   55320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:49:41.958915   55320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:49:41.958992   55320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:49:41.976861   55320 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:49:41.976880   55320 start.go:494] detecting cgroup driver to use...
	I0429 19:49:41.976950   55320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:49:41.995385   55320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:49:42.010837   55320 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:49:42.010901   55320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:49:42.025501   55320 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:49:42.040268   55320 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:49:42.164448   55320 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:49:42.307744   55320 docker.go:233] disabling docker service ...
	I0429 19:49:42.307833   55320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:49:42.324869   55320 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:49:42.339887   55320 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:49:42.480261   55320 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:49:42.618867   55320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 19:49:42.634327   55320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:49:42.654710   55320 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0429 19:49:42.654775   55320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:49:42.665987   55320 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:49:42.666056   55320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:49:42.679220   55320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:49:42.694301   55320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:49:42.706855   55320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:49:42.719051   55320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:49:42.729447   55320 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 19:49:42.729496   55320 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 19:49:42.743722   55320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
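The two commands above load br_netfilter and enable IPv4 forwarding for the current boot only. A minimal sketch, not something this run performs, of making the same settings persistent on a stock Linux host:

    # load the module at every boot and apply the sysctls from a drop-in file
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    sudo sysctl --system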
	I0429 19:49:42.754404   55320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:49:42.887369   55320 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 19:49:43.057032   55320 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:49:43.057090   55320 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:49:43.063144   55320 start.go:562] Will wait 60s for crictl version
	I0429 19:49:43.063202   55320 ssh_runner.go:195] Run: which crictl
	I0429 19:49:43.067533   55320 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:49:43.114098   55320 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 19:49:43.114188   55320 ssh_runner.go:195] Run: crio --version
	I0429 19:49:43.144232   55320 ssh_runner.go:195] Run: crio --version
	I0429 19:49:43.185965   55320 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0429 19:49:43.187280   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetIP
	I0429 19:49:43.190238   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:43.190670   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:49:32 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:49:43.190701   55320 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:49:43.190892   55320 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 19:49:43.195719   55320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:49:43.212935   55320 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-935578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-935578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 19:49:43.213042   55320 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 19:49:43.213105   55320 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:49:43.255162   55320 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 19:49:43.255237   55320 ssh_runner.go:195] Run: which lz4
	I0429 19:49:43.259794   55320 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 19:49:43.264411   55320 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 19:49:43.264443   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0429 19:49:45.312592   55320 crio.go:462] duration metric: took 2.052825328s to copy over tarball
	I0429 19:49:45.312674   55320 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 19:49:48.154641   55320 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.841935974s)
	I0429 19:49:48.154684   55320 crio.go:469] duration metric: took 2.842063911s to extract the tarball
	I0429 19:49:48.154698   55320 ssh_runner.go:146] rm: /preloaded.tar.lz4
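A minimal sketch (run on the host, not shown in this log) of listing what the preload tarball copied above actually contains; the path is the one from the scp line:

    # decompress to stdout and list the first few entries of the embedded tar
    lz4 -dc /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 | tar -tf - | head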
	I0429 19:49:48.198980   55320 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:49:48.249550   55320 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 19:49:48.249576   55320 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 19:49:48.249631   55320 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:49:48.249661   55320 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 19:49:48.249677   55320 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 19:49:48.249704   55320 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0429 19:49:48.249760   55320 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 19:49:48.249853   55320 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0429 19:49:48.249912   55320 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0429 19:49:48.249975   55320 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 19:49:48.250971   55320 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 19:49:48.251376   55320 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0429 19:49:48.251382   55320 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0429 19:49:48.251377   55320 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 19:49:48.251376   55320 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 19:49:48.251376   55320 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:49:48.251447   55320 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0429 19:49:48.251448   55320 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 19:49:48.421066   55320 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0429 19:49:48.430576   55320 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0429 19:49:48.462543   55320 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0429 19:49:48.490632   55320 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0429 19:49:48.490688   55320 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 19:49:48.490733   55320 ssh_runner.go:195] Run: which crictl
	I0429 19:49:48.502405   55320 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0429 19:49:48.502457   55320 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 19:49:48.502507   55320 ssh_runner.go:195] Run: which crictl
	I0429 19:49:48.538251   55320 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0429 19:49:48.538289   55320 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0429 19:49:48.538454   55320 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0429 19:49:48.538489   55320 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0429 19:49:48.538523   55320 ssh_runner.go:195] Run: which crictl
	I0429 19:49:48.548330   55320 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0429 19:49:48.623493   55320 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0429 19:49:48.624986   55320 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0429 19:49:48.660374   55320 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 19:49:48.677546   55320 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0429 19:49:48.677564   55320 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0429 19:49:48.677566   55320 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0429 19:49:48.677603   55320 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0429 19:49:48.677554   55320 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0429 19:49:48.677669   55320 ssh_runner.go:195] Run: which crictl
	I0429 19:49:48.677674   55320 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 19:49:48.677731   55320 ssh_runner.go:195] Run: which crictl
	I0429 19:49:48.691582   55320 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0429 19:49:48.748243   55320 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0429 19:49:48.748294   55320 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 19:49:48.748345   55320 ssh_runner.go:195] Run: which crictl
	I0429 19:49:48.753045   55320 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0429 19:49:48.753074   55320 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0429 19:49:48.753784   55320 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0429 19:49:48.788562   55320 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0429 19:49:48.788608   55320 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0429 19:49:48.788625   55320 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 19:49:48.788647   55320 ssh_runner.go:195] Run: which crictl
	I0429 19:49:48.828909   55320 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0429 19:49:48.828939   55320 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0429 19:49:48.829105   55320 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0429 19:49:48.859249   55320 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0429 19:49:48.881257   55320 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0429 19:49:49.036961   55320 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:49:49.186727   55320 cache_images.go:92] duration metric: took 937.133274ms to LoadCachedImages
	W0429 19:49:49.186839   55320 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
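The warning above is non-fatal: with no files under .minikube/cache/images, the missing images are simply pulled later (for example during kubeadm init) instead of being side-loaded from the cache. A minimal sketch of pulling one of the listed images on the node by hand (image name taken from the list above):

    # fetch the image through CRI-O and confirm it is present
    sudo crictl pull registry.k8s.io/kube-proxy:v1.20.0
    sudo crictl images | grep kube-proxy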
	I0429 19:49:49.186859   55320 kubeadm.go:928] updating node { 192.168.39.125 8443 v1.20.0 crio true true} ...
	I0429 19:49:49.187033   55320 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-935578 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-935578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:49:49.187125   55320 ssh_runner.go:195] Run: crio config
	I0429 19:49:49.237622   55320 cni.go:84] Creating CNI manager for ""
	I0429 19:49:49.237641   55320 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:49:49.237649   55320 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 19:49:49.237666   55320 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-935578 NodeName:kubernetes-upgrade-935578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0429 19:49:49.237854   55320 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-935578"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 19:49:49.237924   55320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0429 19:49:49.249541   55320 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 19:49:49.249614   55320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 19:49:49.261208   55320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0429 19:49:49.281002   55320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:49:49.300796   55320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0429 19:49:49.320379   55320 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0429 19:49:49.324854   55320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:49:49.339455   55320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:49:49.481338   55320 ssh_runner.go:195] Run: sudo systemctl start kubelet
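A minimal sketch (not captured here) of checking the kubelet that was just started with the drop-in shown above:

    # the unit should be active; the last log lines show whether it is waiting for kubeadm
    sudo systemctl is-active kubelet
    sudo journalctl -u kubelet --no-pager -n 20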
	I0429 19:49:49.506083   55320 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578 for IP: 192.168.39.125
	I0429 19:49:49.506120   55320 certs.go:194] generating shared ca certs ...
	I0429 19:49:49.506138   55320 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:49:49.506274   55320 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 19:49:49.506310   55320 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 19:49:49.506316   55320 certs.go:256] generating profile certs ...
	I0429 19:49:49.506362   55320 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/client.key
	I0429 19:49:49.506375   55320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/client.crt with IP's: []
	I0429 19:49:49.562112   55320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/client.crt ...
	I0429 19:49:49.562139   55320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/client.crt: {Name:mk1b8a487b422dcef22dc8d893d36f7273a2c517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:49:49.562296   55320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/client.key ...
	I0429 19:49:49.562309   55320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/client.key: {Name:mk7831a38c11fd057701d9bce35fca84f1f47c33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:49:49.562388   55320 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/apiserver.key.39107e77
	I0429 19:49:49.562412   55320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/apiserver.crt.39107e77 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.125]
	I0429 19:49:49.653991   55320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/apiserver.crt.39107e77 ...
	I0429 19:49:49.654016   55320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/apiserver.crt.39107e77: {Name:mk735aeb1d0f3e3247d64d0e0cae9abd2c149aed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:49:49.654187   55320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/apiserver.key.39107e77 ...
	I0429 19:49:49.654205   55320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/apiserver.key.39107e77: {Name:mkf2ca7ba083ac58b3915c20969dd1e20b4853ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:49:49.654278   55320 certs.go:381] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/apiserver.crt.39107e77 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/apiserver.crt
	I0429 19:49:49.654361   55320 certs.go:385] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/apiserver.key.39107e77 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/apiserver.key
	I0429 19:49:49.654426   55320 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/proxy-client.key
	I0429 19:49:49.654444   55320 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/proxy-client.crt with IP's: []
	I0429 19:49:49.839850   55320 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/proxy-client.crt ...
	I0429 19:49:49.839878   55320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/proxy-client.crt: {Name:mke6e863fb487f2d5135b68d5cc9e168909c3147 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:49:49.840024   55320 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/proxy-client.key ...
	I0429 19:49:49.840038   55320 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/proxy-client.key: {Name:mkd8ffa77f8aaf87b3c5af9a8b2015dc38f584b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:49:49.840193   55320 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 19:49:49.840232   55320 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 19:49:49.840242   55320 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 19:49:49.840273   55320 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 19:49:49.840295   55320 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 19:49:49.840315   55320 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 19:49:49.840349   55320 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:49:49.840869   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:49:49.868461   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 19:49:49.894990   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:49:49.921050   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:49:49.946327   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0429 19:49:49.972725   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 19:49:49.999693   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:49:50.025547   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 19:49:50.052647   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 19:49:50.078591   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:49:50.104980   55320 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 19:49:50.132962   55320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
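A minimal sketch of inspecting the API-server certificate copied above; its SANs should match the list the generator used earlier in this run (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.125):

    # print the SAN extension of the staged apiserver certificate
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'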
	I0429 19:49:50.152264   55320 ssh_runner.go:195] Run: openssl version
	I0429 19:49:50.158786   55320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 19:49:50.171686   55320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 19:49:50.176814   55320 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:49:50.176859   55320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 19:49:50.183159   55320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:49:50.196025   55320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:49:50.208621   55320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:49:50.213660   55320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:49:50.213696   55320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:49:50.219925   55320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:49:50.232616   55320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 19:49:50.245237   55320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 19:49:50.250164   55320 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:49:50.250228   55320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 19:49:50.256570   55320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
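The "<hash>.0" link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes of the corresponding certificates. A minimal sketch of reproducing one of them:

    # prints b5213941 for the minikube CA in this run
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem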
	I0429 19:49:50.269266   55320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:49:50.273884   55320 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 19:49:50.273938   55320 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-935578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-935578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:49:50.274023   55320 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 19:49:50.274090   55320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 19:49:50.314514   55320 cri.go:89] found id: ""
	I0429 19:49:50.314610   55320 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 19:49:50.328866   55320 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 19:49:50.340609   55320 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 19:49:50.353173   55320 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 19:49:50.353193   55320 kubeadm.go:156] found existing configuration files:
	
	I0429 19:49:50.353244   55320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 19:49:50.364274   55320 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 19:49:50.364323   55320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 19:49:50.378878   55320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 19:49:50.392750   55320 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 19:49:50.392809   55320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 19:49:50.407947   55320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 19:49:50.419416   55320 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 19:49:50.419477   55320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 19:49:50.431425   55320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 19:49:50.441962   55320 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 19:49:50.442021   55320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 19:49:50.452938   55320 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 19:49:50.700845   55320 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 19:51:48.504935   55320 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 19:51:48.505263   55320 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 19:51:48.506652   55320 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 19:51:48.506753   55320 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 19:51:48.506929   55320 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 19:51:48.507163   55320 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 19:51:48.507425   55320 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 19:51:48.507601   55320 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 19:51:48.510598   55320 out.go:204]   - Generating certificates and keys ...
	I0429 19:51:48.510713   55320 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 19:51:48.510811   55320 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 19:51:48.510907   55320 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 19:51:48.510988   55320 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 19:51:48.511080   55320 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 19:51:48.511156   55320 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 19:51:48.511238   55320 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 19:51:48.511447   55320 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-935578 localhost] and IPs [192.168.39.125 127.0.0.1 ::1]
	I0429 19:51:48.511523   55320 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 19:51:48.511689   55320 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-935578 localhost] and IPs [192.168.39.125 127.0.0.1 ::1]
	I0429 19:51:48.511785   55320 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 19:51:48.511886   55320 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 19:51:48.511956   55320 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 19:51:48.512042   55320 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 19:51:48.512101   55320 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 19:51:48.512179   55320 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 19:51:48.512279   55320 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 19:51:48.512388   55320 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 19:51:48.512551   55320 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 19:51:48.512661   55320 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 19:51:48.512732   55320 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 19:51:48.512838   55320 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 19:51:48.514332   55320 out.go:204]   - Booting up control plane ...
	I0429 19:51:48.514425   55320 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 19:51:48.514527   55320 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 19:51:48.514628   55320 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 19:51:48.514753   55320 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 19:51:48.514882   55320 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 19:51:48.514937   55320 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 19:51:48.515013   55320 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:51:48.515270   55320 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:51:48.515378   55320 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:51:48.515621   55320 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:51:48.515716   55320 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:51:48.515968   55320 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:51:48.516063   55320 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:51:48.516313   55320 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:51:48.516412   55320 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:51:48.516665   55320 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:51:48.516678   55320 kubeadm.go:309] 
	I0429 19:51:48.516732   55320 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 19:51:48.516789   55320 kubeadm.go:309] 		timed out waiting for the condition
	I0429 19:51:48.516799   55320 kubeadm.go:309] 
	I0429 19:51:48.516852   55320 kubeadm.go:309] 	This error is likely caused by:
	I0429 19:51:48.516898   55320 kubeadm.go:309] 		- The kubelet is not running
	I0429 19:51:48.517040   55320 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 19:51:48.517051   55320 kubeadm.go:309] 
	I0429 19:51:48.517192   55320 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 19:51:48.517239   55320 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 19:51:48.517285   55320 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 19:51:48.517294   55320 kubeadm.go:309] 
	I0429 19:51:48.517439   55320 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 19:51:48.517555   55320 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 19:51:48.517565   55320 kubeadm.go:309] 
	I0429 19:51:48.517687   55320 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 19:51:48.517806   55320 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 19:51:48.517916   55320 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 19:51:48.518017   55320 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	W0429 19:51:48.518179   55320 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-935578 localhost] and IPs [192.168.39.125 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-935578 localhost] and IPs [192.168.39.125 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0429 19:51:48.518240   55320 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 19:51:48.518506   55320 kubeadm.go:309] 
	I0429 19:51:51.279063   55320 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.760793264s)
	I0429 19:51:51.279150   55320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:51:51.300105   55320 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 19:51:51.312714   55320 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 19:51:51.312738   55320 kubeadm.go:156] found existing configuration files:
	
	I0429 19:51:51.312790   55320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 19:51:51.326156   55320 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 19:51:51.326226   55320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 19:51:51.341004   55320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 19:51:51.352370   55320 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 19:51:51.352442   55320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 19:51:51.366591   55320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 19:51:51.379861   55320 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 19:51:51.379936   55320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 19:51:51.391075   55320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 19:51:51.401565   55320 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 19:51:51.401645   55320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 19:51:51.416924   55320 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 19:51:51.498666   55320 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 19:51:51.498745   55320 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 19:51:51.681970   55320 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 19:51:51.682141   55320 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 19:51:51.682310   55320 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 19:51:51.949278   55320 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 19:51:52.067501   55320 out.go:204]   - Generating certificates and keys ...
	I0429 19:51:52.067667   55320 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 19:51:52.067757   55320 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 19:51:52.067880   55320 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 19:51:52.068011   55320 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 19:51:52.068129   55320 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 19:51:52.068203   55320 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 19:51:52.068309   55320 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 19:51:52.068389   55320 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 19:51:52.068511   55320 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 19:51:52.068618   55320 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 19:51:52.068680   55320 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 19:51:52.068765   55320 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 19:51:52.159864   55320 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 19:51:52.276017   55320 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 19:51:52.486736   55320 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 19:51:52.815671   55320 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 19:51:52.833209   55320 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 19:51:52.834296   55320 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 19:51:52.834372   55320 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 19:51:53.012996   55320 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 19:51:53.150141   55320 out.go:204]   - Booting up control plane ...
	I0429 19:51:53.150281   55320 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 19:51:53.150362   55320 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 19:51:53.150432   55320 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 19:51:53.150497   55320 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 19:51:53.150620   55320 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 19:52:33.033674   55320 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 19:52:33.033800   55320 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:52:33.034085   55320 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:52:38.034890   55320 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:52:38.035205   55320 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:52:48.035974   55320 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:52:48.036291   55320 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:53:08.038575   55320 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:53:08.038877   55320 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:53:48.038822   55320 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:53:48.039143   55320 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:53:48.039169   55320 kubeadm.go:309] 
	I0429 19:53:48.039235   55320 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 19:53:48.039305   55320 kubeadm.go:309] 		timed out waiting for the condition
	I0429 19:53:48.039317   55320 kubeadm.go:309] 
	I0429 19:53:48.039361   55320 kubeadm.go:309] 	This error is likely caused by:
	I0429 19:53:48.039411   55320 kubeadm.go:309] 		- The kubelet is not running
	I0429 19:53:48.039565   55320 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 19:53:48.039576   55320 kubeadm.go:309] 
	I0429 19:53:48.039728   55320 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 19:53:48.039772   55320 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 19:53:48.039825   55320 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 19:53:48.039836   55320 kubeadm.go:309] 
	I0429 19:53:48.039967   55320 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 19:53:48.040081   55320 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 19:53:48.040091   55320 kubeadm.go:309] 
	I0429 19:53:48.040238   55320 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 19:53:48.040350   55320 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 19:53:48.040477   55320 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 19:53:48.040581   55320 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 19:53:48.040593   55320 kubeadm.go:309] 
	I0429 19:53:48.041238   55320 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 19:53:48.041379   55320 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 19:53:48.041479   55320 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 19:53:48.041631   55320 kubeadm.go:393] duration metric: took 3m57.767694068s to StartCluster
	I0429 19:53:48.041691   55320 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 19:53:48.041757   55320 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 19:53:48.092952   55320 cri.go:89] found id: ""
	I0429 19:53:48.092985   55320 logs.go:276] 0 containers: []
	W0429 19:53:48.092996   55320 logs.go:278] No container was found matching "kube-apiserver"
	I0429 19:53:48.093009   55320 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 19:53:48.093070   55320 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 19:53:48.133224   55320 cri.go:89] found id: ""
	I0429 19:53:48.133258   55320 logs.go:276] 0 containers: []
	W0429 19:53:48.133269   55320 logs.go:278] No container was found matching "etcd"
	I0429 19:53:48.133277   55320 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 19:53:48.133341   55320 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 19:53:48.179177   55320 cri.go:89] found id: ""
	I0429 19:53:48.179201   55320 logs.go:276] 0 containers: []
	W0429 19:53:48.179208   55320 logs.go:278] No container was found matching "coredns"
	I0429 19:53:48.179214   55320 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 19:53:48.179277   55320 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 19:53:48.223155   55320 cri.go:89] found id: ""
	I0429 19:53:48.223182   55320 logs.go:276] 0 containers: []
	W0429 19:53:48.223191   55320 logs.go:278] No container was found matching "kube-scheduler"
	I0429 19:53:48.223196   55320 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 19:53:48.223255   55320 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 19:53:48.264488   55320 cri.go:89] found id: ""
	I0429 19:53:48.264520   55320 logs.go:276] 0 containers: []
	W0429 19:53:48.264527   55320 logs.go:278] No container was found matching "kube-proxy"
	I0429 19:53:48.264533   55320 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 19:53:48.264592   55320 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 19:53:48.305453   55320 cri.go:89] found id: ""
	I0429 19:53:48.305482   55320 logs.go:276] 0 containers: []
	W0429 19:53:48.305491   55320 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 19:53:48.305499   55320 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 19:53:48.305561   55320 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 19:53:48.347340   55320 cri.go:89] found id: ""
	I0429 19:53:48.347375   55320 logs.go:276] 0 containers: []
	W0429 19:53:48.347386   55320 logs.go:278] No container was found matching "kindnet"
	I0429 19:53:48.347398   55320 logs.go:123] Gathering logs for kubelet ...
	I0429 19:53:48.347419   55320 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 19:53:48.400197   55320 logs.go:123] Gathering logs for dmesg ...
	I0429 19:53:48.400229   55320 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 19:53:48.415161   55320 logs.go:123] Gathering logs for describe nodes ...
	I0429 19:53:48.415189   55320 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 19:53:48.555089   55320 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 19:53:48.555121   55320 logs.go:123] Gathering logs for CRI-O ...
	I0429 19:53:48.555136   55320 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 19:53:48.654031   55320 logs.go:123] Gathering logs for container status ...
	I0429 19:53:48.654099   55320 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0429 19:53:48.699586   55320 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0429 19:53:48.699649   55320 out.go:239] * 
	W0429 19:53:48.699717   55320 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 19:53:48.699749   55320 out.go:239] * 
	W0429 19:53:48.700812   55320 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 19:53:48.703819   55320 out.go:177] 
	W0429 19:53:48.705034   55320 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 19:53:48.705093   55320 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0429 19:53:48.705120   55320 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0429 19:53:48.706817   55320 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-935578 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
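For reference, the repeated [kubelet-check] failures above amount to an HTTP GET against the kubelet's local healthz endpoint on 127.0.0.1:10248 being refused while the kubelet is down. A minimal Go sketch of an equivalent probe follows; it is illustrative only and is not kubeadm's actual wait-control-plane implementation.

	// kubelet_healthz_probe.go: a minimal sketch of the probe behind the
	// [kubelet-check] lines above: poll http://localhost:10248/healthz until
	// the kubelet answers or a deadline passes. Not kubeadm's actual code.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		deadline := time.Now().Add(40 * time.Second) // mirrors the "Initial timeout of 40s"
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err != nil {
				// e.g. "connect: connection refused" while the kubelet is not running
				fmt.Println("kubelet not healthy yet:", err)
				time.Sleep(5 * time.Second)
				continue
			}
			resp.Body.Close()
			fmt.Println("kubelet healthz:", resp.Status)
			return
		}
		fmt.Println("timed out waiting for the kubelet healthz endpoint")
	}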
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-935578
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-935578: (4.612424389s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-935578 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-935578 status --format={{.Host}}: exit status 7 (90.062966ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
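The --format={{.Host}} argument used above is a Go text/template rendered against the profile's status; the sketch below shows that mechanism with a hypothetical Status struct standing in for minikube's real status type.

	// status_format.go: a minimal sketch of how a --format={{.Host}} style flag
	// renders a status value through Go's text/template package. The Status
	// struct here is hypothetical, not minikube's actual type.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
		// prints: Stopped (matching the "-- stdout --" block above)
	}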
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-935578 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-935578 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.918987878s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-935578 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-935578 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-935578 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (120.830713ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-935578] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-935578
	    minikube start -p kubernetes-upgrade-935578 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9355782 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-935578 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
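The K8S_DOWNGRADE_UNSUPPORTED refusal above comes down to a semantic-version comparison between the existing cluster version (v1.30.0) and the requested one (v1.20.0). Below is a minimal sketch of such a guard, using golang.org/x/mod/semver for the comparison; it is not minikube's actual validation code.

	// downgrade_guard.go: a minimal sketch of a version guard like the one behind
	// the K8S_DOWNGRADE_UNSUPPORTED error above. Uses golang.org/x/mod/semver for
	// the comparison; illustrative only, not minikube's implementation.
	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	func checkRequestedVersion(existing, requested string) error {
		if semver.Compare(requested, existing) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
		}
		return nil
	}

	func main() {
		if err := checkRequestedVersion("v1.30.0", "v1.20.0"); err != nil {
			fmt.Println("X Exiting:", err) // mirrors the refusal logged above
		}
	}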
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-935578 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-935578 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.787039534s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-29 19:55:55.365412388 +0000 UTC m=+4605.012787526
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-935578 -n kubernetes-upgrade-935578
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-935578 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-935578 logs -n 25: (2.167144782s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo find                           | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo crio                           | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-870155                                     | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC | 29 Apr 24 19:54 UTC |
	| start   | -p pause-467472                                      | pause-467472              | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC | 29 Apr 24 19:54 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-407092                            | running-upgrade-407092    | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC | 29 Apr 24 19:54 UTC |
	| start   | -p cert-expiration-509508                            | cert-expiration-509508    | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC | 29 Apr 24 19:55 UTC |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-090341                         | force-systemd-flag-090341 | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC | 29 Apr 24 19:55 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-935578                         | kubernetes-upgrade-935578 | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-935578                         | kubernetes-upgrade-935578 | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC | 29 Apr 24 19:55 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p pause-467472                                      | pause-467472              | jenkins | v1.33.0 | 29 Apr 24 19:55 UTC | 29 Apr 24 19:55 UTC |
	| start   | -p cert-options-437743                               | cert-options-437743       | jenkins | v1.33.0 | 29 Apr 24 19:55 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-090341 ssh cat                    | force-systemd-flag-090341 | jenkins | v1.33.0 | 29 Apr 24 19:55 UTC | 29 Apr 24 19:55 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-090341                         | force-systemd-flag-090341 | jenkins | v1.33.0 | 29 Apr 24 19:55 UTC | 29 Apr 24 19:55 UTC |
	| start   | -p old-k8s-version-919612                            | old-k8s-version-919612    | jenkins | v1.33.0 | 29 Apr 24 19:55 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 19:55:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 19:55:41.380580   62888 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:55:41.380706   62888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:55:41.380717   62888 out.go:304] Setting ErrFile to fd 2...
	I0429 19:55:41.380724   62888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:55:41.381040   62888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:55:41.381708   62888 out.go:298] Setting JSON to false
	I0429 19:55:41.382690   62888 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5839,"bootTime":1714414702,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 19:55:41.382749   62888 start.go:139] virtualization: kvm guest
	I0429 19:55:41.385085   62888 out.go:177] * [old-k8s-version-919612] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 19:55:41.386943   62888 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:55:41.387020   62888 notify.go:220] Checking for updates...
	I0429 19:55:41.388480   62888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:55:41.390079   62888 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:55:41.391675   62888 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:55:41.393326   62888 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 19:55:41.396080   62888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:55:41.398229   62888 config.go:182] Loaded profile config "cert-expiration-509508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:55:41.398411   62888 config.go:182] Loaded profile config "cert-options-437743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:55:41.398542   62888 config.go:182] Loaded profile config "kubernetes-upgrade-935578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:55:41.398712   62888 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:55:41.435875   62888 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 19:55:41.437142   62888 start.go:297] selected driver: kvm2
	I0429 19:55:41.437157   62888 start.go:901] validating driver "kvm2" against <nil>
	I0429 19:55:41.437168   62888 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:55:41.437836   62888 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:55:41.437908   62888 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 19:55:41.454338   62888 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 19:55:41.454392   62888 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 19:55:41.454635   62888 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:55:41.454708   62888 cni.go:84] Creating CNI manager for ""
	I0429 19:55:41.454725   62888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:55:41.454736   62888 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 19:55:41.454811   62888 start.go:340] cluster config:
	{Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:55:41.454944   62888 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:55:41.457073   62888 out.go:177] * Starting "old-k8s-version-919612" primary control-plane node in "old-k8s-version-919612" cluster
	I0429 19:55:37.078511   62375 main.go:141] libmachine: (cert-options-437743) DBG | domain cert-options-437743 has defined MAC address 52:54:00:63:c6:1c in network mk-cert-options-437743
	I0429 19:55:37.079089   62375 main.go:141] libmachine: (cert-options-437743) DBG | unable to find current IP address of domain cert-options-437743 in network mk-cert-options-437743
	I0429 19:55:37.079132   62375 main.go:141] libmachine: (cert-options-437743) DBG | I0429 19:55:37.079038   62562 retry.go:31] will retry after 2.048629542s: waiting for machine to come up
	I0429 19:55:39.130386   62375 main.go:141] libmachine: (cert-options-437743) DBG | domain cert-options-437743 has defined MAC address 52:54:00:63:c6:1c in network mk-cert-options-437743
	I0429 19:55:39.131031   62375 main.go:141] libmachine: (cert-options-437743) DBG | unable to find current IP address of domain cert-options-437743 in network mk-cert-options-437743
	I0429 19:55:39.131052   62375 main.go:141] libmachine: (cert-options-437743) DBG | I0429 19:55:39.130960   62562 retry.go:31] will retry after 2.63950746s: waiting for machine to come up
	I0429 19:55:41.772074   62375 main.go:141] libmachine: (cert-options-437743) DBG | domain cert-options-437743 has defined MAC address 52:54:00:63:c6:1c in network mk-cert-options-437743
	I0429 19:55:41.772657   62375 main.go:141] libmachine: (cert-options-437743) DBG | unable to find current IP address of domain cert-options-437743 in network mk-cert-options-437743
	I0429 19:55:41.458508   62888 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 19:55:41.458564   62888 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0429 19:55:41.458579   62888 cache.go:56] Caching tarball of preloaded images
	I0429 19:55:41.458673   62888 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 19:55:41.458686   62888 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0429 19:55:41.458804   62888 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/config.json ...
	I0429 19:55:41.458828   62888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/config.json: {Name:mkdb2cecd76ba01739d27fb17a68ae70ffb28975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:55:41.458981   62888 start.go:360] acquireMachinesLock for old-k8s-version-919612: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:55:41.772701   62375 main.go:141] libmachine: (cert-options-437743) DBG | I0429 19:55:41.772635   62562 retry.go:31] will retry after 2.958952246s: waiting for machine to come up
	I0429 19:55:44.733265   62375 main.go:141] libmachine: (cert-options-437743) DBG | domain cert-options-437743 has defined MAC address 52:54:00:63:c6:1c in network mk-cert-options-437743
	I0429 19:55:44.733881   62375 main.go:141] libmachine: (cert-options-437743) DBG | unable to find current IP address of domain cert-options-437743 in network mk-cert-options-437743
	I0429 19:55:44.733891   62375 main.go:141] libmachine: (cert-options-437743) DBG | I0429 19:55:44.733803   62562 retry.go:31] will retry after 4.307543644s: waiting for machine to come up
	I0429 19:55:46.692119   61801 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 c15390ff5632c02e1365daf305c302470ea5c2bae15183161e5bdbb6bc21a80c ea5f22a3dc3a5095c7a9cbba2f9891a65b5d135a12b9f31adf32505da18e3b36 27d3d0f4c33ac012c5f184f8a89530e49694fd19185b190c7913eb383656679d 333f5702ea50929bc05d8ba4c88a3a36253ac06ae5608fc3d5bf7c861470e923 4e8894cce098444ad170ef8cb8b3d5b3051808cb9bbf47a6ed789962ef8763b4 320e179ed277e243d832c218c5d0ab961e48b8bffae10a4a39e9e1a6614b374d e1e22fb05258a12197b253781d61c3e71ea797856eb6bf0e44758f40dda236f1 47234ed27aca05e31cd0ba2548a24bf607287c87096129b7b3853515e75b3c59 8a0511aa3a40b68fb7ffe3d3c222064147d0c3dd1f953ea0f46c0b00e46debc3 c5ea3f908765530fc5be29ab50ebbc0ad4f5a0372198cda25187dc466c88f075: (14.445479668s)
	W0429 19:55:46.692191   61801 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 c15390ff5632c02e1365daf305c302470ea5c2bae15183161e5bdbb6bc21a80c ea5f22a3dc3a5095c7a9cbba2f9891a65b5d135a12b9f31adf32505da18e3b36 27d3d0f4c33ac012c5f184f8a89530e49694fd19185b190c7913eb383656679d 333f5702ea50929bc05d8ba4c88a3a36253ac06ae5608fc3d5bf7c861470e923 4e8894cce098444ad170ef8cb8b3d5b3051808cb9bbf47a6ed789962ef8763b4 320e179ed277e243d832c218c5d0ab961e48b8bffae10a4a39e9e1a6614b374d e1e22fb05258a12197b253781d61c3e71ea797856eb6bf0e44758f40dda236f1 47234ed27aca05e31cd0ba2548a24bf607287c87096129b7b3853515e75b3c59 8a0511aa3a40b68fb7ffe3d3c222064147d0c3dd1f953ea0f46c0b00e46debc3 c5ea3f908765530fc5be29ab50ebbc0ad4f5a0372198cda25187dc466c88f075: Process exited with status 1
	stdout:
	c15390ff5632c02e1365daf305c302470ea5c2bae15183161e5bdbb6bc21a80c
	ea5f22a3dc3a5095c7a9cbba2f9891a65b5d135a12b9f31adf32505da18e3b36
	27d3d0f4c33ac012c5f184f8a89530e49694fd19185b190c7913eb383656679d
	333f5702ea50929bc05d8ba4c88a3a36253ac06ae5608fc3d5bf7c861470e923
	4e8894cce098444ad170ef8cb8b3d5b3051808cb9bbf47a6ed789962ef8763b4
	320e179ed277e243d832c218c5d0ab961e48b8bffae10a4a39e9e1a6614b374d
	e1e22fb05258a12197b253781d61c3e71ea797856eb6bf0e44758f40dda236f1
	47234ed27aca05e31cd0ba2548a24bf607287c87096129b7b3853515e75b3c59
	8a0511aa3a40b68fb7ffe3d3c222064147d0c3dd1f953ea0f46c0b00e46debc3
	
	stderr:
	E0429 19:55:46.684677    3883 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5ea3f908765530fc5be29ab50ebbc0ad4f5a0372198cda25187dc466c88f075\": container with ID starting with c5ea3f908765530fc5be29ab50ebbc0ad4f5a0372198cda25187dc466c88f075 not found: ID does not exist" containerID="c5ea3f908765530fc5be29ab50ebbc0ad4f5a0372198cda25187dc466c88f075"
	time="2024-04-29T19:55:46Z" level=fatal msg="stopping the container \"c5ea3f908765530fc5be29ab50ebbc0ad4f5a0372198cda25187dc466c88f075\": rpc error: code = NotFound desc = could not find container \"c5ea3f908765530fc5be29ab50ebbc0ad4f5a0372198cda25187dc466c88f075\": container with ID starting with c5ea3f908765530fc5be29ab50ebbc0ad4f5a0372198cda25187dc466c88f075 not found: ID does not exist"
	I0429 19:55:46.692268   61801 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 19:55:46.737575   61801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 19:55:46.749389   61801 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Apr 29 19:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Apr 29 19:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Apr 29 19:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Apr 29 19:54 /etc/kubernetes/scheduler.conf
	
	I0429 19:55:46.749463   61801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 19:55:46.759940   61801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 19:55:46.770910   61801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 19:55:46.783036   61801 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:55:46.783092   61801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 19:55:46.794986   61801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 19:55:46.807067   61801 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:55:46.807125   61801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 19:55:46.819595   61801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 19:55:46.831483   61801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:55:46.889327   61801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:55:48.212841   61801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.32347254s)
	I0429 19:55:48.212874   61801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:55:48.468119   61801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:55:48.562631   61801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:55:48.672557   61801 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:55:48.672653   61801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:55:49.173629   61801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:55:49.673534   61801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:55:49.691177   61801 api_server.go:72] duration metric: took 1.018608351s to wait for apiserver process to appear ...
	I0429 19:55:49.691208   61801 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:55:49.691232   61801 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0429 19:55:49.043495   62375 main.go:141] libmachine: (cert-options-437743) DBG | domain cert-options-437743 has defined MAC address 52:54:00:63:c6:1c in network mk-cert-options-437743
	I0429 19:55:49.043877   62375 main.go:141] libmachine: (cert-options-437743) DBG | unable to find current IP address of domain cert-options-437743 in network mk-cert-options-437743
	I0429 19:55:49.043897   62375 main.go:141] libmachine: (cert-options-437743) DBG | I0429 19:55:49.043848   62562 retry.go:31] will retry after 4.835228307s: waiting for machine to come up
	I0429 19:55:51.977952   61801 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 19:55:51.977981   61801 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 19:55:51.977993   61801 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0429 19:55:52.017054   61801 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 19:55:52.017090   61801 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 19:55:52.191488   61801 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0429 19:55:52.195873   61801 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:55:52.195904   61801 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:55:52.691441   61801 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0429 19:55:52.696838   61801 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:55:52.696874   61801 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:55:53.191403   61801 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0429 19:55:53.196481   61801 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:55:53.196514   61801 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:55:53.691386   61801 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0429 19:55:53.695814   61801 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0429 19:55:53.702379   61801 api_server.go:141] control plane version: v1.30.0
	I0429 19:55:53.702413   61801 api_server.go:131] duration metric: took 4.011196334s to wait for apiserver health ...
	I0429 19:55:53.702426   61801 cni.go:84] Creating CNI manager for ""
	I0429 19:55:53.702434   61801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:55:53.704333   61801 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 19:55:53.705827   61801 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 19:55:53.718074   61801 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 19:55:53.737322   61801 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:55:53.747488   61801 system_pods.go:59] 8 kube-system pods found
	I0429 19:55:53.747521   61801 system_pods.go:61] "coredns-7db6d8ff4d-fpq6t" [23af4036-e6f8-469a-a3c3-1993d263455e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 19:55:53.747530   61801 system_pods.go:61] "coredns-7db6d8ff4d-qdl7m" [3f9e7b8e-40f8-49ea-8f88-21d0489f0908] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 19:55:53.747536   61801 system_pods.go:61] "etcd-kubernetes-upgrade-935578" [511cc4ac-bfd4-46a4-8c48-0b954852cfa7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 19:55:53.747543   61801 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-935578" [55adb3f1-47d5-4782-95d8-990208eb5cb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 19:55:53.747550   61801 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-935578" [114e8485-0cd6-4134-a700-ff8320706e46] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 19:55:53.747554   61801 system_pods.go:61] "kube-proxy-7kztm" [171130d7-c725-4c93-8fc1-2993b7d44621] Running
	I0429 19:55:53.747559   61801 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-935578" [12686abc-e102-4156-8a6e-42dc9312b332] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 19:55:53.747566   61801 system_pods.go:61] "storage-provisioner" [1278f3a6-bb55-4dac-9289-f4c9d462e19e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0429 19:55:53.747574   61801 system_pods.go:74] duration metric: took 10.223941ms to wait for pod list to return data ...
	I0429 19:55:53.747583   61801 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:55:53.751007   61801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:55:53.751031   61801 node_conditions.go:123] node cpu capacity is 2
	I0429 19:55:53.751039   61801 node_conditions.go:105] duration metric: took 3.451084ms to run NodePressure ...
	I0429 19:55:53.751054   61801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:55:54.082577   61801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 19:55:54.099870   61801 ops.go:34] apiserver oom_adj: -16
	I0429 19:55:54.099893   61801 kubeadm.go:591] duration metric: took 21.939800595s to restartPrimaryControlPlane
	I0429 19:55:54.099908   61801 kubeadm.go:393] duration metric: took 22.10411691s to StartCluster
	I0429 19:55:54.099928   61801 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:55:54.100000   61801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:55:54.101008   61801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:55:54.101246   61801 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:55:54.104096   61801 out.go:177] * Verifying Kubernetes components...
	I0429 19:55:54.101311   61801 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 19:55:54.104146   61801 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-935578"
	I0429 19:55:54.104181   61801 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-935578"
	I0429 19:55:54.101476   61801 config.go:182] Loaded profile config "kubernetes-upgrade-935578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:55:54.104204   61801 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-935578"
	I0429 19:55:54.105405   61801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-935578"
	I0429 19:55:54.105410   61801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0429 19:55:54.104224   61801 addons.go:243] addon storage-provisioner should already be in state true
	I0429 19:55:54.105516   61801 host.go:66] Checking if "kubernetes-upgrade-935578" exists ...
	I0429 19:55:54.105811   61801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:55:54.105841   61801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:55:54.105820   61801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:55:54.105951   61801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:55:54.127367   61801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34415
	I0429 19:55:54.127377   61801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I0429 19:55:54.127851   61801 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:55:54.127983   61801 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:55:54.128389   61801 main.go:141] libmachine: Using API Version  1
	I0429 19:55:54.128414   61801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:55:54.128481   61801 main.go:141] libmachine: Using API Version  1
	I0429 19:55:54.128511   61801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:55:54.128791   61801 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:55:54.128909   61801 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:55:54.129077   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetState
	I0429 19:55:54.129472   61801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:55:54.129525   61801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:55:54.131732   61801 kapi.go:59] client config for kubernetes-upgrade-935578: &rest.Config{Host:"https://192.168.39.125:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/client.crt", KeyFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/client.key", CAFile:"/home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 19:55:54.132136   61801 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-935578"
	W0429 19:55:54.132155   61801 addons.go:243] addon default-storageclass should already be in state true
	I0429 19:55:54.132185   61801 host.go:66] Checking if "kubernetes-upgrade-935578" exists ...
	I0429 19:55:54.132535   61801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:55:54.132579   61801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:55:54.149314   61801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35495
	I0429 19:55:54.149462   61801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36059
	I0429 19:55:54.149896   61801 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:55:54.150007   61801 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:55:54.150429   61801 main.go:141] libmachine: Using API Version  1
	I0429 19:55:54.150448   61801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:55:54.150561   61801 main.go:141] libmachine: Using API Version  1
	I0429 19:55:54.150585   61801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:55:54.150815   61801 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:55:54.150904   61801 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:55:54.151058   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetState
	I0429 19:55:54.151386   61801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:55:54.151418   61801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:55:54.152963   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:55:54.155044   61801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:55:54.156291   61801 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 19:55:54.156310   61801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 19:55:54.156335   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHHostname
	I0429 19:55:54.160004   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:55:54.160390   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:54:07 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:55:54.160444   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:55:54.160695   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHPort
	I0429 19:55:54.160871   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:55:54.161002   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHUsername
	I0429 19:55:54.161237   61801 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578/id_rsa Username:docker}
	I0429 19:55:54.168217   61801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45997
	I0429 19:55:54.168619   61801 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:55:54.169090   61801 main.go:141] libmachine: Using API Version  1
	I0429 19:55:54.169105   61801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:55:54.169550   61801 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:55:54.169740   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetState
	I0429 19:55:54.171610   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:55:54.171849   61801 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 19:55:54.171866   61801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 19:55:54.171879   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHHostname
	I0429 19:55:54.174603   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:55:54.174989   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:1f:ba", ip: ""} in network mk-kubernetes-upgrade-935578: {Iface:virbr1 ExpiryTime:2024-04-29 20:54:07 +0000 UTC Type:0 Mac:52:54:00:8a:1f:ba Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:kubernetes-upgrade-935578 Clientid:01:52:54:00:8a:1f:ba}
	I0429 19:55:54.175010   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | domain kubernetes-upgrade-935578 has defined IP address 192.168.39.125 and MAC address 52:54:00:8a:1f:ba in network mk-kubernetes-upgrade-935578
	I0429 19:55:54.175170   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHPort
	I0429 19:55:54.175336   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHKeyPath
	I0429 19:55:54.175477   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .GetSSHUsername
	I0429 19:55:54.175614   61801 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/kubernetes-upgrade-935578/id_rsa Username:docker}
	I0429 19:55:54.325255   61801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:55:54.349051   61801 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:55:54.349140   61801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:55:54.368323   61801 api_server.go:72] duration metric: took 267.03052ms to wait for apiserver process to appear ...
	I0429 19:55:54.368352   61801 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:55:54.368377   61801 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0429 19:55:54.373903   61801 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0429 19:55:54.377509   61801 api_server.go:141] control plane version: v1.30.0
	I0429 19:55:54.377536   61801 api_server.go:131] duration metric: took 9.175996ms to wait for apiserver health ...
	I0429 19:55:54.377547   61801 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:55:54.384575   61801 system_pods.go:59] 8 kube-system pods found
	I0429 19:55:54.384618   61801 system_pods.go:61] "coredns-7db6d8ff4d-fpq6t" [23af4036-e6f8-469a-a3c3-1993d263455e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 19:55:54.384630   61801 system_pods.go:61] "coredns-7db6d8ff4d-qdl7m" [3f9e7b8e-40f8-49ea-8f88-21d0489f0908] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 19:55:54.384641   61801 system_pods.go:61] "etcd-kubernetes-upgrade-935578" [511cc4ac-bfd4-46a4-8c48-0b954852cfa7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 19:55:54.384656   61801 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-935578" [55adb3f1-47d5-4782-95d8-990208eb5cb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 19:55:54.384671   61801 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-935578" [114e8485-0cd6-4134-a700-ff8320706e46] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 19:55:54.384683   61801 system_pods.go:61] "kube-proxy-7kztm" [171130d7-c725-4c93-8fc1-2993b7d44621] Running
	I0429 19:55:54.384691   61801 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-935578" [12686abc-e102-4156-8a6e-42dc9312b332] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 19:55:54.384701   61801 system_pods.go:61] "storage-provisioner" [1278f3a6-bb55-4dac-9289-f4c9d462e19e] Running
	I0429 19:55:54.384709   61801 system_pods.go:74] duration metric: took 7.156881ms to wait for pod list to return data ...
	I0429 19:55:54.384726   61801 kubeadm.go:576] duration metric: took 283.444172ms to wait for: map[apiserver:true system_pods:true]
	I0429 19:55:54.384743   61801 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:55:54.389390   61801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:55:54.389411   61801 node_conditions.go:123] node cpu capacity is 2
	I0429 19:55:54.389419   61801 node_conditions.go:105] duration metric: took 4.670806ms to run NodePressure ...
	I0429 19:55:54.389430   61801 start.go:240] waiting for startup goroutines ...
	I0429 19:55:54.427498   61801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 19:55:54.430444   61801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 19:55:54.598500   61801 main.go:141] libmachine: Making call to close driver server
	I0429 19:55:54.598530   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .Close
	I0429 19:55:54.598842   61801 main.go:141] libmachine: Successfully made call to close driver server
	I0429 19:55:54.598864   61801 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 19:55:54.598863   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Closing plugin on server side
	I0429 19:55:54.598875   61801 main.go:141] libmachine: Making call to close driver server
	I0429 19:55:54.598884   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .Close
	I0429 19:55:54.599114   61801 main.go:141] libmachine: Successfully made call to close driver server
	I0429 19:55:54.599134   61801 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 19:55:54.599167   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Closing plugin on server side
	I0429 19:55:54.606207   61801 main.go:141] libmachine: Making call to close driver server
	I0429 19:55:54.606232   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .Close
	I0429 19:55:54.606553   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Closing plugin on server side
	I0429 19:55:54.606575   61801 main.go:141] libmachine: Successfully made call to close driver server
	I0429 19:55:54.606588   61801 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 19:55:55.289493   61801 main.go:141] libmachine: Making call to close driver server
	I0429 19:55:55.289519   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .Close
	I0429 19:55:55.289827   61801 main.go:141] libmachine: Successfully made call to close driver server
	I0429 19:55:55.289883   61801 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 19:55:55.289898   61801 main.go:141] libmachine: Making call to close driver server
	I0429 19:55:55.289918   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Closing plugin on server side
	I0429 19:55:55.289969   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .Close
	I0429 19:55:55.290263   61801 main.go:141] libmachine: Successfully made call to close driver server
	I0429 19:55:55.290316   61801 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 19:55:55.290283   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) DBG | Closing plugin on server side
	I0429 19:55:55.292176   61801 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0429 19:55:55.293345   61801 addons.go:505] duration metric: took 1.19203712s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0429 19:55:55.293375   61801 start.go:245] waiting for cluster config update ...
	I0429 19:55:55.293385   61801 start.go:254] writing updated cluster config ...
	I0429 19:55:55.293586   61801 ssh_runner.go:195] Run: rm -f paused
	I0429 19:55:55.346544   61801 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 19:55:55.348371   61801 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-935578" cluster and "default" namespace by default
	I0429 19:55:55.503707   62888 start.go:364] duration metric: took 14.044686977s to acquireMachinesLock for "old-k8s-version-919612"
	I0429 19:55:55.503783   62888 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:55:55.503938   62888 start.go:125] createHost starting for "" (driver="kvm2")
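	
	The start log above spends most of its time in the apiserver healthz gate: api_server.go polls https://192.168.39.125:8443/healthz roughly every 500ms, logs each 500 response together with the post-start hooks that have not finished (here rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes), and only proceeds once the endpoint returns 200 "ok". The sketch below is an illustrative reconstruction of that loop, not minikube's actual implementation; the helper name, the fixed 500ms interval, and the use of InsecureSkipVerify are assumptions for the example (the real client authenticates with the cluster CA and client certificates shown in the kapi.go config above).
	
	// healthz_wait_sketch.go - minimal sketch of the healthz polling seen in the log above.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz is a hypothetical helper: it polls url until it returns
	// HTTP 200 or the timeout expires. A real client would present the cluster
	// CA and client certs instead of skipping TLS verification.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported "ok"
				}
				// A 500 here typically lists the post-start hooks that are
				// still pending, exactly as the log entries above show.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.39.125:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	
	In the run above this gate cleared after about 4s ("duration metric: took 4.011196334s to wait for apiserver health"), after which the restart proceeded to CNI configuration, kube-system pod checks, and addon enablement.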
	
	
	==> CRI-O <==
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.263897443Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420556263870967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df2bd869-b68d-41b2-bc20-59e492358777 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.264623493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c201d8d-c043-4938-bc31-4df1d8252006 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.264903249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c201d8d-c043-4938-bc31-4df1d8252006 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.266436967Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0da143744eb1b3ff803d56a5e967b649af7eb7f05abbc47b412a0bce876a370,PodSandboxId:362fa083037c3f953807e82a4755a78dbb88deceaf450cc7b34689c0a1f4badf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420552917777294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpq6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23af4036-e6f8-469a-a3c3-1993d263455e,},Annotations:map[string]string{io.kubernetes.container.hash: 368a5147,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881ac94dd527780c59077fff51bc26b56f18321de034179f1303ace799141bfc,PodSandboxId:2aeef7c758b918aa9b10b3d317bc1de0ea05ff036d23348249e3f4a3c88be229,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420552907789512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdl7m,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3f9e7b8e-40f8-49ea-8f88-21d0489f0908,},Annotations:map[string]string{io.kubernetes.container.hash: 3934606e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:005c0591e99df1af6bc8af6274fcf4482eed403142f29bad998c99fe17ad8a64,PodSandboxId:81735d9911dcffff9732c264de028510e7f19cadeb4442c1ac974b65cad0f29b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1714420552927331574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1278f3a6-bb55-4dac-9289-f4c9d462e19e,},Annotations:map[string]string{io.kubernetes.container.hash: 319e0e49,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e68158a2ba85b09899867ad6f775ce790cddf72b13a6ef82045ac1af5829e005,PodSandboxId:da6d737ef0818cb73ecc4a3bd161b897e1cb5c8b44da92b9775e2ef6003754ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNI
NG,CreatedAt:1714420549138579537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721fcc98776f18022edb7681ba1b8ef4,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ff2a6953826f465dabc4ce857644e77ffff2007b45aff1bf30ee0c11d3bc36,PodSandboxId:aaae5b260442f52603ec59f78cb9a1a7c15f5aa4efeeabe5e97cbc75cda5c67d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,Cr
eatedAt:1714420549153650167,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469db3ceb05169c39fe0959d4ba8d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5d55c7f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11cf93c6356b4bb95b008d5193b3a0a0149cebedc2741011139ab6cec4f98f79,PodSandboxId:fb098817557e31a391d97053c911b85cfcb45291911ec4eacf12b834593ddf5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNIN
G,CreatedAt:1714420549129736825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04ff6efa4066650fc5ff6edcdebfc64,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477966f69eb1bcd21f38c51f2121a948bf70db4464b199d6861737c604163516,PodSandboxId:24b944ee412648e79e36d71dbaa05ed7853ca42cd08318035d12c2d7a56dcca6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAIN
ER_RUNNING,CreatedAt:1714420545308560144,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7kztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 171130d7-c725-4c93-8fc1-2993b7d44621,},Annotations:map[string]string{io.kubernetes.container.hash: 956cd87,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5477895d56f2214caedba40f548c7566e924a0535b73e530a5efc3aa5ded970d,PodSandboxId:74473ef10629bd97551af630dd51bccb1a1fc68e9d7f339e9bf68e66ec82a4b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714420544305663005
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c43399e49801cd4077720b88f0ee353,},Annotations:map[string]string{io.kubernetes.container.hash: 57f95737,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c15390ff5632c02e1365daf305c302470ea5c2bae15183161e5bdbb6bc21a80c,PodSandboxId:362fa083037c3f953807e82a4755a78dbb88deceaf450cc7b34689c0a1f4badf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420531250834737,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpq6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23af4036-e6f8-469a-a3c3-1993d263455e,},Annotations:map[string]string{io.kubernetes.container.hash: 368a5147,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea5f22a3dc3a5095c7a9cbba2f9891a65b5d135a12b9f31adf32505da18e3b36,PodSandboxId:2aeef7c758b918aa9b10b3d317bc1de0ea05ff036d23348249e3f4a3c88be229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420531129826572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdl7m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9e7b8e-40f8-49ea-8f88-21d0489f0908,},Annotations:map[string]string{io.kubernetes.container.hash: 3934606e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d3d0f4c33ac012c5f184f8a89530e49694fd19185b190c7913eb383656679d,PodSandboxId:81735d9911dcffff9732c264de028510e7f19cadeb4442c
1ac974b65cad0f29b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714420530782655872,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1278f3a6-bb55-4dac-9289-f4c9d462e19e,},Annotations:map[string]string{io.kubernetes.container.hash: 319e0e49,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333f5702ea50929bc05d8ba4c88a3a36253ac06ae5608fc3d5bf7c861470e923,PodSandboxId:5c93df686f1fb052ec0c7733f16f4044f0293e16d5fdbc41e81e028bf898
83e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714420528289805608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04ff6efa4066650fc5ff6edcdebfc64,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e8894cce098444ad170ef8cb8b3d5b3051808cb9bbf47a6ed789962ef8763b4,PodSandboxId:258ba2a22b37688543b9d39336a9135e4dc9f69
666af40173e2298a68d939271,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714420528278208461,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c43399e49801cd4077720b88f0ee353,},Annotations:map[string]string{io.kubernetes.container.hash: 57f95737,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320e179ed277e243d832c218c5d0ab961e48b8bffae10a4a39e9e1a6614b374d,PodSandboxId:835ded34b3a8d17268249c821411df5a61d8f77fabefaa976d5d4a5ccac0564d,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714420528063736119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7kztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 171130d7-c725-4c93-8fc1-2993b7d44621,},Annotations:map[string]string{io.kubernetes.container.hash: 956cd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1e22fb05258a12197b253781d61c3e71ea797856eb6bf0e44758f40dda236f1,PodSandboxId:7429e78366d797b9055a37903119623738d959b8fa34d6876676374867e0d113,Metadata:&ContainerMetadata{Name:kube-apiserv
er,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714420527876716431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469db3ceb05169c39fe0959d4ba8d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5d55c7f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47234ed27aca05e31cd0ba2548a24bf607287c87096129b7b3853515e75b3c59,PodSandboxId:279193fcfad267d8400eaf522238aafc06c0b1632ca7909e2a0c04ece864f13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714420527655853066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721fcc98776f18022edb7681ba1b8ef4,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c201d8d-c043-4938-bc31-4df1d8252006 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.328743829Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=944a2a0a-f46b-40d9-8e6d-fbc23d8534f8 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.329326252Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=944a2a0a-f46b-40d9-8e6d-fbc23d8534f8 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.333563425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=111524a1-5420-4082-b0c1-add4b0ede605 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.334814993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420556334774435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=111524a1-5420-4082-b0c1-add4b0ede605 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.335868902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=869b4d5e-4224-404b-ab01-14fd81271f09 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.335982307Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=869b4d5e-4224-404b-ab01-14fd81271f09 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.337301998Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0da143744eb1b3ff803d56a5e967b649af7eb7f05abbc47b412a0bce876a370,PodSandboxId:362fa083037c3f953807e82a4755a78dbb88deceaf450cc7b34689c0a1f4badf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420552917777294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpq6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23af4036-e6f8-469a-a3c3-1993d263455e,},Annotations:map[string]string{io.kubernetes.container.hash: 368a5147,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881ac94dd527780c59077fff51bc26b56f18321de034179f1303ace799141bfc,PodSandboxId:2aeef7c758b918aa9b10b3d317bc1de0ea05ff036d23348249e3f4a3c88be229,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420552907789512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdl7m,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3f9e7b8e-40f8-49ea-8f88-21d0489f0908,},Annotations:map[string]string{io.kubernetes.container.hash: 3934606e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:005c0591e99df1af6bc8af6274fcf4482eed403142f29bad998c99fe17ad8a64,PodSandboxId:81735d9911dcffff9732c264de028510e7f19cadeb4442c1ac974b65cad0f29b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1714420552927331574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1278f3a6-bb55-4dac-9289-f4c9d462e19e,},Annotations:map[string]string{io.kubernetes.container.hash: 319e0e49,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e68158a2ba85b09899867ad6f775ce790cddf72b13a6ef82045ac1af5829e005,PodSandboxId:da6d737ef0818cb73ecc4a3bd161b897e1cb5c8b44da92b9775e2ef6003754ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNI
NG,CreatedAt:1714420549138579537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721fcc98776f18022edb7681ba1b8ef4,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ff2a6953826f465dabc4ce857644e77ffff2007b45aff1bf30ee0c11d3bc36,PodSandboxId:aaae5b260442f52603ec59f78cb9a1a7c15f5aa4efeeabe5e97cbc75cda5c67d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,Cr
eatedAt:1714420549153650167,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469db3ceb05169c39fe0959d4ba8d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5d55c7f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11cf93c6356b4bb95b008d5193b3a0a0149cebedc2741011139ab6cec4f98f79,PodSandboxId:fb098817557e31a391d97053c911b85cfcb45291911ec4eacf12b834593ddf5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNIN
G,CreatedAt:1714420549129736825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04ff6efa4066650fc5ff6edcdebfc64,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477966f69eb1bcd21f38c51f2121a948bf70db4464b199d6861737c604163516,PodSandboxId:24b944ee412648e79e36d71dbaa05ed7853ca42cd08318035d12c2d7a56dcca6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAIN
ER_RUNNING,CreatedAt:1714420545308560144,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7kztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 171130d7-c725-4c93-8fc1-2993b7d44621,},Annotations:map[string]string{io.kubernetes.container.hash: 956cd87,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5477895d56f2214caedba40f548c7566e924a0535b73e530a5efc3aa5ded970d,PodSandboxId:74473ef10629bd97551af630dd51bccb1a1fc68e9d7f339e9bf68e66ec82a4b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714420544305663005
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c43399e49801cd4077720b88f0ee353,},Annotations:map[string]string{io.kubernetes.container.hash: 57f95737,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c15390ff5632c02e1365daf305c302470ea5c2bae15183161e5bdbb6bc21a80c,PodSandboxId:362fa083037c3f953807e82a4755a78dbb88deceaf450cc7b34689c0a1f4badf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420531250834737,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpq6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23af4036-e6f8-469a-a3c3-1993d263455e,},Annotations:map[string]string{io.kubernetes.container.hash: 368a5147,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea5f22a3dc3a5095c7a9cbba2f9891a65b5d135a12b9f31adf32505da18e3b36,PodSandboxId:2aeef7c758b918aa9b10b3d317bc1de0ea05ff036d23348249e3f4a3c88be229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420531129826572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdl7m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9e7b8e-40f8-49ea-8f88-21d0489f0908,},Annotations:map[string]string{io.kubernetes.container.hash: 3934606e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d3d0f4c33ac012c5f184f8a89530e49694fd19185b190c7913eb383656679d,PodSandboxId:81735d9911dcffff9732c264de028510e7f19cadeb4442c
1ac974b65cad0f29b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714420530782655872,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1278f3a6-bb55-4dac-9289-f4c9d462e19e,},Annotations:map[string]string{io.kubernetes.container.hash: 319e0e49,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333f5702ea50929bc05d8ba4c88a3a36253ac06ae5608fc3d5bf7c861470e923,PodSandboxId:5c93df686f1fb052ec0c7733f16f4044f0293e16d5fdbc41e81e028bf898
83e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714420528289805608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04ff6efa4066650fc5ff6edcdebfc64,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e8894cce098444ad170ef8cb8b3d5b3051808cb9bbf47a6ed789962ef8763b4,PodSandboxId:258ba2a22b37688543b9d39336a9135e4dc9f69
666af40173e2298a68d939271,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714420528278208461,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c43399e49801cd4077720b88f0ee353,},Annotations:map[string]string{io.kubernetes.container.hash: 57f95737,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320e179ed277e243d832c218c5d0ab961e48b8bffae10a4a39e9e1a6614b374d,PodSandboxId:835ded34b3a8d17268249c821411df5a61d8f77fabefaa976d5d4a5ccac0564d,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714420528063736119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7kztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 171130d7-c725-4c93-8fc1-2993b7d44621,},Annotations:map[string]string{io.kubernetes.container.hash: 956cd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1e22fb05258a12197b253781d61c3e71ea797856eb6bf0e44758f40dda236f1,PodSandboxId:7429e78366d797b9055a37903119623738d959b8fa34d6876676374867e0d113,Metadata:&ContainerMetadata{Name:kube-apiserv
er,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714420527876716431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469db3ceb05169c39fe0959d4ba8d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5d55c7f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47234ed27aca05e31cd0ba2548a24bf607287c87096129b7b3853515e75b3c59,PodSandboxId:279193fcfad267d8400eaf522238aafc06c0b1632ca7909e2a0c04ece864f13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714420527655853066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721fcc98776f18022edb7681ba1b8ef4,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=869b4d5e-4224-404b-ab01-14fd81271f09 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.406150309Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3fb04cf2-f8ce-4bf8-9bf9-c0bc49f423c7 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.406248575Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3fb04cf2-f8ce-4bf8-9bf9-c0bc49f423c7 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.408001462Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94edb977-0c2e-4302-9e41-e40893017550 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.409413081Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420556409380597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94edb977-0c2e-4302-9e41-e40893017550 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.410509990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=295a724a-8bac-4e0b-abd1-79b400e9e1b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.410593448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=295a724a-8bac-4e0b-abd1-79b400e9e1b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.410933444Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0da143744eb1b3ff803d56a5e967b649af7eb7f05abbc47b412a0bce876a370,PodSandboxId:362fa083037c3f953807e82a4755a78dbb88deceaf450cc7b34689c0a1f4badf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420552917777294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpq6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23af4036-e6f8-469a-a3c3-1993d263455e,},Annotations:map[string]string{io.kubernetes.container.hash: 368a5147,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881ac94dd527780c59077fff51bc26b56f18321de034179f1303ace799141bfc,PodSandboxId:2aeef7c758b918aa9b10b3d317bc1de0ea05ff036d23348249e3f4a3c88be229,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420552907789512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdl7m,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3f9e7b8e-40f8-49ea-8f88-21d0489f0908,},Annotations:map[string]string{io.kubernetes.container.hash: 3934606e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:005c0591e99df1af6bc8af6274fcf4482eed403142f29bad998c99fe17ad8a64,PodSandboxId:81735d9911dcffff9732c264de028510e7f19cadeb4442c1ac974b65cad0f29b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1714420552927331574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1278f3a6-bb55-4dac-9289-f4c9d462e19e,},Annotations:map[string]string{io.kubernetes.container.hash: 319e0e49,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e68158a2ba85b09899867ad6f775ce790cddf72b13a6ef82045ac1af5829e005,PodSandboxId:da6d737ef0818cb73ecc4a3bd161b897e1cb5c8b44da92b9775e2ef6003754ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNI
NG,CreatedAt:1714420549138579537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721fcc98776f18022edb7681ba1b8ef4,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ff2a6953826f465dabc4ce857644e77ffff2007b45aff1bf30ee0c11d3bc36,PodSandboxId:aaae5b260442f52603ec59f78cb9a1a7c15f5aa4efeeabe5e97cbc75cda5c67d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,Cr
eatedAt:1714420549153650167,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469db3ceb05169c39fe0959d4ba8d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5d55c7f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11cf93c6356b4bb95b008d5193b3a0a0149cebedc2741011139ab6cec4f98f79,PodSandboxId:fb098817557e31a391d97053c911b85cfcb45291911ec4eacf12b834593ddf5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNIN
G,CreatedAt:1714420549129736825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04ff6efa4066650fc5ff6edcdebfc64,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477966f69eb1bcd21f38c51f2121a948bf70db4464b199d6861737c604163516,PodSandboxId:24b944ee412648e79e36d71dbaa05ed7853ca42cd08318035d12c2d7a56dcca6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAIN
ER_RUNNING,CreatedAt:1714420545308560144,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7kztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 171130d7-c725-4c93-8fc1-2993b7d44621,},Annotations:map[string]string{io.kubernetes.container.hash: 956cd87,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5477895d56f2214caedba40f548c7566e924a0535b73e530a5efc3aa5ded970d,PodSandboxId:74473ef10629bd97551af630dd51bccb1a1fc68e9d7f339e9bf68e66ec82a4b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714420544305663005
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c43399e49801cd4077720b88f0ee353,},Annotations:map[string]string{io.kubernetes.container.hash: 57f95737,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c15390ff5632c02e1365daf305c302470ea5c2bae15183161e5bdbb6bc21a80c,PodSandboxId:362fa083037c3f953807e82a4755a78dbb88deceaf450cc7b34689c0a1f4badf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420531250834737,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpq6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23af4036-e6f8-469a-a3c3-1993d263455e,},Annotations:map[string]string{io.kubernetes.container.hash: 368a5147,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea5f22a3dc3a5095c7a9cbba2f9891a65b5d135a12b9f31adf32505da18e3b36,PodSandboxId:2aeef7c758b918aa9b10b3d317bc1de0ea05ff036d23348249e3f4a3c88be229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420531129826572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdl7m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9e7b8e-40f8-49ea-8f88-21d0489f0908,},Annotations:map[string]string{io.kubernetes.container.hash: 3934606e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d3d0f4c33ac012c5f184f8a89530e49694fd19185b190c7913eb383656679d,PodSandboxId:81735d9911dcffff9732c264de028510e7f19cadeb4442c
1ac974b65cad0f29b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714420530782655872,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1278f3a6-bb55-4dac-9289-f4c9d462e19e,},Annotations:map[string]string{io.kubernetes.container.hash: 319e0e49,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333f5702ea50929bc05d8ba4c88a3a36253ac06ae5608fc3d5bf7c861470e923,PodSandboxId:5c93df686f1fb052ec0c7733f16f4044f0293e16d5fdbc41e81e028bf898
83e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714420528289805608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04ff6efa4066650fc5ff6edcdebfc64,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e8894cce098444ad170ef8cb8b3d5b3051808cb9bbf47a6ed789962ef8763b4,PodSandboxId:258ba2a22b37688543b9d39336a9135e4dc9f69
666af40173e2298a68d939271,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714420528278208461,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c43399e49801cd4077720b88f0ee353,},Annotations:map[string]string{io.kubernetes.container.hash: 57f95737,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320e179ed277e243d832c218c5d0ab961e48b8bffae10a4a39e9e1a6614b374d,PodSandboxId:835ded34b3a8d17268249c821411df5a61d8f77fabefaa976d5d4a5ccac0564d,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714420528063736119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7kztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 171130d7-c725-4c93-8fc1-2993b7d44621,},Annotations:map[string]string{io.kubernetes.container.hash: 956cd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1e22fb05258a12197b253781d61c3e71ea797856eb6bf0e44758f40dda236f1,PodSandboxId:7429e78366d797b9055a37903119623738d959b8fa34d6876676374867e0d113,Metadata:&ContainerMetadata{Name:kube-apiserv
er,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714420527876716431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469db3ceb05169c39fe0959d4ba8d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5d55c7f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47234ed27aca05e31cd0ba2548a24bf607287c87096129b7b3853515e75b3c59,PodSandboxId:279193fcfad267d8400eaf522238aafc06c0b1632ca7909e2a0c04ece864f13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714420527655853066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721fcc98776f18022edb7681ba1b8ef4,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=295a724a-8bac-4e0b-abd1-79b400e9e1b3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.460434099Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=13980e71-26b6-4cc0-b3a8-5e8de40e4ecc name=/runtime.v1.RuntimeService/Version
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.460539429Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13980e71-26b6-4cc0-b3a8-5e8de40e4ecc name=/runtime.v1.RuntimeService/Version
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.462755660Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae364fcc-e268-4dae-a919-28689853a50c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.463621090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420556463576749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae364fcc-e268-4dae-a919-28689853a50c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.465408533Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f31275c-0426-47c0-8481-66b21b297198 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.465503581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f31275c-0426-47c0-8481-66b21b297198 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:56 kubernetes-upgrade-935578 crio[3023]: time="2024-04-29 19:55:56.466001814Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0da143744eb1b3ff803d56a5e967b649af7eb7f05abbc47b412a0bce876a370,PodSandboxId:362fa083037c3f953807e82a4755a78dbb88deceaf450cc7b34689c0a1f4badf,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420552917777294,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpq6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23af4036-e6f8-469a-a3c3-1993d263455e,},Annotations:map[string]string{io.kubernetes.container.hash: 368a5147,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881ac94dd527780c59077fff51bc26b56f18321de034179f1303ace799141bfc,PodSandboxId:2aeef7c758b918aa9b10b3d317bc1de0ea05ff036d23348249e3f4a3c88be229,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420552907789512,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdl7m,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 3f9e7b8e-40f8-49ea-8f88-21d0489f0908,},Annotations:map[string]string{io.kubernetes.container.hash: 3934606e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:005c0591e99df1af6bc8af6274fcf4482eed403142f29bad998c99fe17ad8a64,PodSandboxId:81735d9911dcffff9732c264de028510e7f19cadeb4442c1ac974b65cad0f29b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1714420552927331574,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1278f3a6-bb55-4dac-9289-f4c9d462e19e,},Annotations:map[string]string{io.kubernetes.container.hash: 319e0e49,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e68158a2ba85b09899867ad6f775ce790cddf72b13a6ef82045ac1af5829e005,PodSandboxId:da6d737ef0818cb73ecc4a3bd161b897e1cb5c8b44da92b9775e2ef6003754ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNI
NG,CreatedAt:1714420549138579537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721fcc98776f18022edb7681ba1b8ef4,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ff2a6953826f465dabc4ce857644e77ffff2007b45aff1bf30ee0c11d3bc36,PodSandboxId:aaae5b260442f52603ec59f78cb9a1a7c15f5aa4efeeabe5e97cbc75cda5c67d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,Cr
eatedAt:1714420549153650167,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469db3ceb05169c39fe0959d4ba8d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5d55c7f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11cf93c6356b4bb95b008d5193b3a0a0149cebedc2741011139ab6cec4f98f79,PodSandboxId:fb098817557e31a391d97053c911b85cfcb45291911ec4eacf12b834593ddf5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNIN
G,CreatedAt:1714420549129736825,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04ff6efa4066650fc5ff6edcdebfc64,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477966f69eb1bcd21f38c51f2121a948bf70db4464b199d6861737c604163516,PodSandboxId:24b944ee412648e79e36d71dbaa05ed7853ca42cd08318035d12c2d7a56dcca6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAIN
ER_RUNNING,CreatedAt:1714420545308560144,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7kztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 171130d7-c725-4c93-8fc1-2993b7d44621,},Annotations:map[string]string{io.kubernetes.container.hash: 956cd87,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5477895d56f2214caedba40f548c7566e924a0535b73e530a5efc3aa5ded970d,PodSandboxId:74473ef10629bd97551af630dd51bccb1a1fc68e9d7f339e9bf68e66ec82a4b5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714420544305663005
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c43399e49801cd4077720b88f0ee353,},Annotations:map[string]string{io.kubernetes.container.hash: 57f95737,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c15390ff5632c02e1365daf305c302470ea5c2bae15183161e5bdbb6bc21a80c,PodSandboxId:362fa083037c3f953807e82a4755a78dbb88deceaf450cc7b34689c0a1f4badf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420531250834737,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fpq6t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23af4036-e6f8-469a-a3c3-1993d263455e,},Annotations:map[string]string{io.kubernetes.container.hash: 368a5147,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea5f22a3dc3a5095c7a9cbba2f9891a65b5d135a12b9f31adf32505da18e3b36,PodSandboxId:2aeef7c758b918aa9b10b3d317bc1de0ea05ff036d23348249e3f4a3c88be229,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420531129826572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qdl7m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f9e7b8e-40f8-49ea-8f88-21d0489f0908,},Annotations:map[string]string{io.kubernetes.container.hash: 3934606e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27d3d0f4c33ac012c5f184f8a89530e49694fd19185b190c7913eb383656679d,PodSandboxId:81735d9911dcffff9732c264de028510e7f19cadeb4442c
1ac974b65cad0f29b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714420530782655872,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1278f3a6-bb55-4dac-9289-f4c9d462e19e,},Annotations:map[string]string{io.kubernetes.container.hash: 319e0e49,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:333f5702ea50929bc05d8ba4c88a3a36253ac06ae5608fc3d5bf7c861470e923,PodSandboxId:5c93df686f1fb052ec0c7733f16f4044f0293e16d5fdbc41e81e028bf898
83e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714420528289805608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b04ff6efa4066650fc5ff6edcdebfc64,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e8894cce098444ad170ef8cb8b3d5b3051808cb9bbf47a6ed789962ef8763b4,PodSandboxId:258ba2a22b37688543b9d39336a9135e4dc9f69
666af40173e2298a68d939271,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714420528278208461,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c43399e49801cd4077720b88f0ee353,},Annotations:map[string]string{io.kubernetes.container.hash: 57f95737,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320e179ed277e243d832c218c5d0ab961e48b8bffae10a4a39e9e1a6614b374d,PodSandboxId:835ded34b3a8d17268249c821411df5a61d8f77fabefaa976d5d4a5ccac0564d,Metadata:&
ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714420528063736119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7kztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 171130d7-c725-4c93-8fc1-2993b7d44621,},Annotations:map[string]string{io.kubernetes.container.hash: 956cd87,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1e22fb05258a12197b253781d61c3e71ea797856eb6bf0e44758f40dda236f1,PodSandboxId:7429e78366d797b9055a37903119623738d959b8fa34d6876676374867e0d113,Metadata:&ContainerMetadata{Name:kube-apiserv
er,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714420527876716431,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 469db3ceb05169c39fe0959d4ba8d4a1,},Annotations:map[string]string{io.kubernetes.container.hash: 5d55c7f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47234ed27aca05e31cd0ba2548a24bf607287c87096129b7b3853515e75b3c59,PodSandboxId:279193fcfad267d8400eaf522238aafc06c0b1632ca7909e2a0c04ece864f13b,Metadata:&ContainerMetadata{Name:kube-scheduler,Att
empt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714420527655853066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-935578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 721fcc98776f18022edb7681ba1b8ef4,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f31275c-0426-47c0-8481-66b21b297198 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	005c0591e99df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   81735d9911dcf       storage-provisioner
	d0da143744eb1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   362fa083037c3       coredns-7db6d8ff4d-fpq6t
	881ac94dd5277       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   2aeef7c758b91       coredns-7db6d8ff4d-qdl7m
	c0ff2a6953826       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   7 seconds ago       Running             kube-apiserver            2                   aaae5b260442f       kube-apiserver-kubernetes-upgrade-935578
	e68158a2ba85b       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   7 seconds ago       Running             kube-scheduler            2                   da6d737ef0818       kube-scheduler-kubernetes-upgrade-935578
	11cf93c6356b4       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   7 seconds ago       Running             kube-controller-manager   2                   fb098817557e3       kube-controller-manager-kubernetes-upgrade-935578
	477966f69eb1b       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   11 seconds ago      Running             kube-proxy                2                   24b944ee41264       kube-proxy-7kztm
	5477895d56f22       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   12 seconds ago      Running             etcd                      2                   74473ef10629b       etcd-kubernetes-upgrade-935578
	c15390ff5632c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   25 seconds ago      Exited              coredns                   1                   362fa083037c3       coredns-7db6d8ff4d-fpq6t
	ea5f22a3dc3a5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   25 seconds ago      Exited              coredns                   1                   2aeef7c758b91       coredns-7db6d8ff4d-qdl7m
	27d3d0f4c33ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   25 seconds ago      Exited              storage-provisioner       2                   81735d9911dcf       storage-provisioner
	333f5702ea509       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   28 seconds ago      Exited              kube-controller-manager   1                   5c93df686f1fb       kube-controller-manager-kubernetes-upgrade-935578
	4e8894cce0984       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   28 seconds ago      Exited              etcd                      1                   258ba2a22b376       etcd-kubernetes-upgrade-935578
	320e179ed277e       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   28 seconds ago      Exited              kube-proxy                1                   835ded34b3a8d       kube-proxy-7kztm
	e1e22fb05258a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   28 seconds ago      Exited              kube-apiserver            1                   7429e78366d79       kube-apiserver-kubernetes-upgrade-935578
	47234ed27aca0       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   28 seconds ago      Exited              kube-scheduler            1                   279193fcfad26       kube-scheduler-kubernetes-upgrade-935578
	
	
	==> coredns [881ac94dd527780c59077fff51bc26b56f18321de034179f1303ace799141bfc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c15390ff5632c02e1365daf305c302470ea5c2bae15183161e5bdbb6bc21a80c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d0da143744eb1b3ff803d56a5e967b649af7eb7f05abbc47b412a0bce876a370] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ea5f22a3dc3a5095c7a9cbba2f9891a65b5d135a12b9f31adf32505da18e3b36] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-935578
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-935578
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:54:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-935578
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:55:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:55:52 +0000   Mon, 29 Apr 2024 19:54:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:55:52 +0000   Mon, 29 Apr 2024 19:54:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:55:52 +0000   Mon, 29 Apr 2024 19:54:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:55:52 +0000   Mon, 29 Apr 2024 19:54:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    kubernetes-upgrade-935578
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0f10e4a214e45dd85da2a3a7710aa05
	  System UUID:                f0f10e4a-214e-45dd-85da-2a3a7710aa05
	  Boot ID:                    ae84b870-a9a6-402f-9e9a-84ed383b8f91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fpq6t                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     69s
	  kube-system                 coredns-7db6d8ff4d-qdl7m                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     69s
	  kube-system                 etcd-kubernetes-upgrade-935578                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         83s
	  kube-system                 kube-apiserver-kubernetes-upgrade-935578             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-935578    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-7kztm                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-kubernetes-upgrade-935578             100m (5%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 68s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  89s (x8 over 89s)  kubelet          Node kubernetes-upgrade-935578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s (x8 over 89s)  kubelet          Node kubernetes-upgrade-935578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s (x7 over 89s)  kubelet          Node kubernetes-upgrade-935578 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           70s                node-controller  Node kubernetes-upgrade-935578 event: Registered Node kubernetes-upgrade-935578 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-935578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-935578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-935578 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.314948] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.060063] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067224] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.222608] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.160419] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.345198] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +4.823527] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +0.062948] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.876217] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +8.540823] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	[  +0.085491] kauditd_printk_skb: 97 callbacks suppressed
	[ +12.388091] kauditd_printk_skb: 21 callbacks suppressed
	[Apr29 19:55] kauditd_printk_skb: 79 callbacks suppressed
	[  +7.743062] systemd-fstab-generator[2250]: Ignoring "noauto" option for root device
	[  +0.181251] systemd-fstab-generator[2262]: Ignoring "noauto" option for root device
	[  +0.350403] systemd-fstab-generator[2344]: Ignoring "noauto" option for root device
	[  +0.226053] systemd-fstab-generator[2381]: Ignoring "noauto" option for root device
	[  +1.101061] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +2.959274] systemd-fstab-generator[3728]: Ignoring "noauto" option for root device
	[  +0.887385] kauditd_printk_skb: 300 callbacks suppressed
	[ +16.228669] systemd-fstab-generator[4049]: Ignoring "noauto" option for root device
	[  +0.095512] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.115241] kauditd_printk_skb: 54 callbacks suppressed
	[  +0.621499] systemd-fstab-generator[4512]: Ignoring "noauto" option for root device
	
	
	==> etcd [4e8894cce098444ad170ef8cb8b3d5b3051808cb9bbf47a6ed789962ef8763b4] <==
	
	
	==> etcd [5477895d56f2214caedba40f548c7566e924a0535b73e530a5efc3aa5ded970d] <==
	{"level":"info","ts":"2024-04-29T19:55:44.472028Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:55:44.472158Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:55:44.472553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c switched to configuration voters=(17641705551115235980)"}
	{"level":"info","ts":"2024-04-29T19:55:44.472729Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","added-peer-id":"f4d3edba9e42b28c","added-peer-peer-urls":["https://192.168.39.125:2380"]}
	{"level":"info","ts":"2024-04-29T19:55:44.472928Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:55:44.47301Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:55:44.475266Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T19:55:44.475655Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f4d3edba9e42b28c","initial-advertise-peer-urls":["https://192.168.39.125:2380"],"listen-peer-urls":["https://192.168.39.125:2380"],"advertise-client-urls":["https://192.168.39.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T19:55:44.475834Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T19:55:44.476005Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2024-04-29T19:55:44.476113Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2024-04-29T19:55:45.759281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T19:55:45.759357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T19:55:45.759399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgPreVoteResp from f4d3edba9e42b28c at term 2"}
	{"level":"info","ts":"2024-04-29T19:55:45.759412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T19:55:45.759417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgVoteResp from f4d3edba9e42b28c at term 3"}
	{"level":"info","ts":"2024-04-29T19:55:45.759425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became leader at term 3"}
	{"level":"info","ts":"2024-04-29T19:55:45.759433Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4d3edba9e42b28c elected leader f4d3edba9e42b28c at term 3"}
	{"level":"info","ts":"2024-04-29T19:55:45.761224Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f4d3edba9e42b28c","local-member-attributes":"{Name:kubernetes-upgrade-935578 ClientURLs:[https://192.168.39.125:2379]}","request-path":"/0/members/f4d3edba9e42b28c/attributes","cluster-id":"9838e9e2cfdaeabf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T19:55:45.761239Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:55:45.761445Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T19:55:45.761482Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T19:55:45.761395Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:55:45.763684Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T19:55:45.763688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.125:2379"}
	
	
	==> kernel <==
	 19:55:57 up 1 min,  0 users,  load average: 1.64, 0.68, 0.25
	Linux kubernetes-upgrade-935578 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c0ff2a6953826f465dabc4ce857644e77ffff2007b45aff1bf30ee0c11d3bc36] <==
	I0429 19:55:51.970713       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0429 19:55:52.026969       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 19:55:52.027165       1 policy_source.go:224] refreshing policies
	I0429 19:55:52.037849       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 19:55:52.040304       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 19:55:52.044881       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 19:55:52.046592       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 19:55:52.046633       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 19:55:52.060331       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 19:55:52.070788       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 19:55:52.075903       1 aggregator.go:165] initial CRD sync complete...
	I0429 19:55:52.075983       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 19:55:52.076008       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 19:55:52.076032       1 cache.go:39] Caches are synced for autoregister controller
	I0429 19:55:52.101928       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 19:55:52.120872       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 19:55:52.130650       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0429 19:55:52.155338       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0429 19:55:52.951609       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 19:55:53.900408       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 19:55:53.922855       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 19:55:53.971999       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 19:55:54.044505       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 19:55:54.064364       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 19:55:55.162980       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [e1e22fb05258a12197b253781d61c3e71ea797856eb6bf0e44758f40dda236f1] <==
	
	
	==> kube-controller-manager [11cf93c6356b4bb95b008d5193b3a0a0149cebedc2741011139ab6cec4f98f79] <==
	I0429 19:55:54.042202       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0429 19:55:54.053004       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0429 19:55:54.053302       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0429 19:55:54.053529       1 shared_informer.go:313] Waiting for caches to sync for job
	I0429 19:55:54.058408       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0429 19:55:54.059430       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0429 19:55:54.061712       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0429 19:55:54.063797       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0429 19:55:54.064281       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0429 19:55:54.065540       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0429 19:55:54.065726       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0429 19:55:54.067381       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0429 19:55:54.066666       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0429 19:55:54.067256       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0429 19:55:54.067313       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0429 19:55:54.069620       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0429 19:55:54.067328       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0429 19:55:54.067355       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0429 19:55:54.067347       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0429 19:55:54.071571       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0429 19:55:54.067363       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0429 19:55:54.067274       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0429 19:55:54.071963       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0429 19:55:54.071973       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0429 19:55:54.103014       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-controller-manager [333f5702ea50929bc05d8ba4c88a3a36253ac06ae5608fc3d5bf7c861470e923] <==
	
	
	==> kube-proxy [320e179ed277e243d832c218c5d0ab961e48b8bffae10a4a39e9e1a6614b374d] <==
	I0429 19:55:28.959435       1 server_linux.go:69] "Using iptables proxy"
	E0429 19:55:28.968667       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-935578\": dial tcp 192.168.39.125:8443: connect: connection refused"
	
	
	==> kube-proxy [477966f69eb1bcd21f38c51f2121a948bf70db4464b199d6861737c604163516] <==
	I0429 19:55:45.433674       1 server_linux.go:69] "Using iptables proxy"
	E0429 19:55:45.440797       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-935578\": dial tcp 192.168.39.125:8443: connect: connection refused"
	E0429 19:55:46.614132       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-935578\": dial tcp 192.168.39.125:8443: connect: connection refused"
	E0429 19:55:48.837575       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-935578\": dial tcp 192.168.39.125:8443: connect: connection refused"
	I0429 19:55:53.312998       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.125"]
	I0429 19:55:53.387287       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:55:53.387368       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:55:53.387388       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:55:53.390613       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:55:53.390873       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:55:53.390912       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:55:53.392536       1 config.go:192] "Starting service config controller"
	I0429 19:55:53.392582       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:55:53.392620       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:55:53.392626       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:55:53.393602       1 config.go:319] "Starting node config controller"
	I0429 19:55:53.393718       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:55:53.493644       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:55:53.493847       1 shared_informer.go:320] Caches are synced for node config
	I0429 19:55:53.493654       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [47234ed27aca05e31cd0ba2548a24bf607287c87096129b7b3853515e75b3c59] <==
	
	
	==> kube-scheduler [e68158a2ba85b09899867ad6f775ce790cddf72b13a6ef82045ac1af5829e005] <==
	I0429 19:55:50.794778       1 serving.go:380] Generated self-signed cert in-memory
	W0429 19:55:52.045416       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 19:55:52.045843       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0429 19:55:52.046269       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 19:55:52.046427       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 19:55:52.125701       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 19:55:52.126333       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:55:52.131538       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 19:55:52.131742       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 19:55:52.131857       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 19:55:52.131950       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 19:55:52.232714       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 19:55:49 kubernetes-upgrade-935578 kubelet[4056]: W0429 19:55:49.432520    4056 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-935578&limit=500&resourceVersion=0": dial tcp 192.168.39.125:8443: connect: connection refused
	Apr 29 19:55:49 kubernetes-upgrade-935578 kubelet[4056]: E0429 19:55:49.432616    4056 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-935578&limit=500&resourceVersion=0": dial tcp 192.168.39.125:8443: connect: connection refused
	Apr 29 19:55:50 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:50.110426    4056 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-935578"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.141340    4056 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-935578"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.141714    4056 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-935578"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.143707    4056 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.144895    4056 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: E0429 19:55:52.158699    4056 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"kubernetes-upgrade-935578\" not found"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: E0429 19:55:52.259607    4056 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"kubernetes-upgrade-935578\" not found"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: E0429 19:55:52.360599    4056 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"kubernetes-upgrade-935578\" not found"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: E0429 19:55:52.461436    4056 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"kubernetes-upgrade-935578\" not found"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.581787    4056 apiserver.go:52] "Watching apiserver"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.584654    4056 topology_manager.go:215] "Topology Admit Handler" podUID="1278f3a6-bb55-4dac-9289-f4c9d462e19e" podNamespace="kube-system" podName="storage-provisioner"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.584847    4056 topology_manager.go:215] "Topology Admit Handler" podUID="23af4036-e6f8-469a-a3c3-1993d263455e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fpq6t"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.585221    4056 topology_manager.go:215] "Topology Admit Handler" podUID="3f9e7b8e-40f8-49ea-8f88-21d0489f0908" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qdl7m"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.585335    4056 topology_manager.go:215] "Topology Admit Handler" podUID="171130d7-c725-4c93-8fc1-2993b7d44621" podNamespace="kube-system" podName="kube-proxy-7kztm"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.593326    4056 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.599031    4056 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/171130d7-c725-4c93-8fc1-2993b7d44621-xtables-lock\") pod \"kube-proxy-7kztm\" (UID: \"171130d7-c725-4c93-8fc1-2993b7d44621\") " pod="kube-system/kube-proxy-7kztm"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.599223    4056 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/171130d7-c725-4c93-8fc1-2993b7d44621-lib-modules\") pod \"kube-proxy-7kztm\" (UID: \"171130d7-c725-4c93-8fc1-2993b7d44621\") " pod="kube-system/kube-proxy-7kztm"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.599283    4056 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1278f3a6-bb55-4dac-9289-f4c9d462e19e-tmp\") pod \"storage-provisioner\" (UID: \"1278f3a6-bb55-4dac-9289-f4c9d462e19e\") " pod="kube-system/storage-provisioner"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.886270    4056 scope.go:117] "RemoveContainer" containerID="c15390ff5632c02e1365daf305c302470ea5c2bae15183161e5bdbb6bc21a80c"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.886756    4056 scope.go:117] "RemoveContainer" containerID="ea5f22a3dc3a5095c7a9cbba2f9891a65b5d135a12b9f31adf32505da18e3b36"
	Apr 29 19:55:52 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:52.887168    4056 scope.go:117] "RemoveContainer" containerID="27d3d0f4c33ac012c5f184f8a89530e49694fd19185b190c7913eb383656679d"
	Apr 29 19:55:55 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:55.988888    4056 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 29 19:55:57 kubernetes-upgrade-935578 kubelet[4056]: I0429 19:55:57.454247    4056 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [005c0591e99df1af6bc8af6274fcf4482eed403142f29bad998c99fe17ad8a64] <==
	I0429 19:55:53.103263       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 19:55:53.121993       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 19:55:53.122117       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [27d3d0f4c33ac012c5f184f8a89530e49694fd19185b190c7913eb383656679d] <==
	I0429 19:55:30.926955       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0429 19:55:30.929854       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-935578 -n kubernetes-upgrade-935578
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-935578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-935578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-935578
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-935578: (1.548186723s)
--- FAIL: TestKubernetesUpgrade (403.26s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (62.4s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-467472 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-467472 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.899729745s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-467472] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Downloading driver docker-machine-driver-kvm2:
	* Starting "pause-467472" primary control-plane node in "pause-467472" cluster
	* Updating the running kvm2 "pause-467472" VM ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-467472" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:54:03.314960   61304 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:54:03.315233   61304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:54:03.315243   61304 out.go:304] Setting ErrFile to fd 2...
	I0429 19:54:03.315247   61304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:54:03.315417   61304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:54:03.315925   61304 out.go:298] Setting JSON to false
	I0429 19:54:03.316912   61304 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5741,"bootTime":1714414702,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 19:54:03.316973   61304 start.go:139] virtualization: kvm guest
	I0429 19:54:03.319365   61304 out.go:177] * [pause-467472] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 19:54:03.320849   61304 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:54:03.320818   61304 notify.go:220] Checking for updates...
	I0429 19:54:03.322257   61304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:54:03.323632   61304 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:54:03.325103   61304 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:54:03.326477   61304 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 19:54:03.329475   61304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:54:03.331390   61304 config.go:182] Loaded profile config "pause-467472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:54:03.331996   61304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version/docker-machine-driver-kvm2
	I0429 19:54:03.332065   61304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:54:03.361753   61304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41867
	I0429 19:54:03.362127   61304 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:54:03.362721   61304 main.go:141] libmachine: Using API Version  1
	I0429 19:54:03.362750   61304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:54:03.363115   61304 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:54:03.363297   61304 main.go:141] libmachine: (pause-467472) Calling .DriverName
	I0429 19:54:03.363511   61304 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:54:03.363805   61304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version/docker-machine-driver-kvm2
	I0429 19:54:03.363858   61304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:54:03.389732   61304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33961
	I0429 19:54:03.390111   61304 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:54:03.390577   61304 main.go:141] libmachine: Using API Version  1
	I0429 19:54:03.390596   61304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:54:03.390874   61304 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:54:03.391031   61304 main.go:141] libmachine: (pause-467472) Calling .DriverName
	I0429 19:54:03.429413   61304 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 19:54:03.430707   61304 start.go:297] selected driver: kvm2
	I0429 19:54:03.430720   61304 start.go:901] validating driver "kvm2" against &{Name:pause-467472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-467472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:54:03.430835   61304 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:54:03.431132   61304 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:54:05.747030   61304 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	W0429 19:54:15.775008   61304 install.go:62] docker-machine-driver-kvm2: exit status 1
	I0429 19:54:15.776964   61304 out.go:177] * Downloading driver docker-machine-driver-kvm2:
	I0429 19:54:15.778335   61304 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.33.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.33.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:54:17.788300   61304 cni.go:84] Creating CNI manager for ""
	I0429 19:54:17.788333   61304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:54:17.788405   61304 start.go:340] cluster config:
	{Name:pause-467472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-467472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:54:17.788526   61304 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:54:17.791081   61304 out.go:177] * Starting "pause-467472" primary control-plane node in "pause-467472" cluster
	I0429 19:54:17.792982   61304 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:54:17.793030   61304 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 19:54:17.793042   61304 cache.go:56] Caching tarball of preloaded images
	I0429 19:54:17.793172   61304 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 19:54:17.793188   61304 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 19:54:17.793353   61304 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/pause-467472/config.json ...
	I0429 19:54:17.793604   61304 start.go:360] acquireMachinesLock for pause-467472: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:54:18.851037   61304 start.go:364] duration metric: took 1.057398288s to acquireMachinesLock for "pause-467472"
	I0429 19:54:18.851101   61304 start.go:96] Skipping create...Using existing machine configuration
	I0429 19:54:18.851110   61304 fix.go:54] fixHost starting: 
	I0429 19:54:18.851440   61304 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:54:18.851484   61304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:54:18.870501   61304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41437
	I0429 19:54:18.871064   61304 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:54:18.871794   61304 main.go:141] libmachine: Using API Version  1
	I0429 19:54:18.871818   61304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:54:18.872307   61304 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:54:18.872508   61304 main.go:141] libmachine: (pause-467472) Calling .DriverName
	I0429 19:54:18.872656   61304 main.go:141] libmachine: (pause-467472) Calling .GetState
	I0429 19:54:18.874384   61304 fix.go:112] recreateIfNeeded on pause-467472: state=Running err=<nil>
	W0429 19:54:18.874405   61304 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 19:54:18.876318   61304 out.go:177] * Updating the running kvm2 "pause-467472" VM ...
	I0429 19:54:18.877747   61304 machine.go:94] provisionDockerMachine start ...
	I0429 19:54:18.877771   61304 main.go:141] libmachine: (pause-467472) Calling .DriverName
	I0429 19:54:18.878006   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHHostname
	I0429 19:54:18.880596   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:18.881021   61304 main.go:141] libmachine: (pause-467472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:19:55", ip: ""} in network mk-pause-467472: {Iface:virbr2 ExpiryTime:2024-04-29 20:53:19 +0000 UTC Type:0 Mac:52:54:00:12:19:55 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-467472 Clientid:01:52:54:00:12:19:55}
	I0429 19:54:18.881048   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined IP address 192.168.50.54 and MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:18.881222   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHPort
	I0429 19:54:18.881414   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:18.881576   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:18.881696   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHUsername
	I0429 19:54:18.881861   61304 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:18.882111   61304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0429 19:54:18.882128   61304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 19:54:18.994265   61304 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-467472
	
	I0429 19:54:18.994303   61304 main.go:141] libmachine: (pause-467472) Calling .GetMachineName
	I0429 19:54:18.994567   61304 buildroot.go:166] provisioning hostname "pause-467472"
	I0429 19:54:18.994598   61304 main.go:141] libmachine: (pause-467472) Calling .GetMachineName
	I0429 19:54:18.994772   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHHostname
	I0429 19:54:18.997480   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:18.998030   61304 main.go:141] libmachine: (pause-467472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:19:55", ip: ""} in network mk-pause-467472: {Iface:virbr2 ExpiryTime:2024-04-29 20:53:19 +0000 UTC Type:0 Mac:52:54:00:12:19:55 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-467472 Clientid:01:52:54:00:12:19:55}
	I0429 19:54:18.998059   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined IP address 192.168.50.54 and MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:18.998161   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHPort
	I0429 19:54:18.998362   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:18.998554   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:18.998748   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHUsername
	I0429 19:54:18.998960   61304 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:18.999188   61304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0429 19:54:18.999208   61304 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-467472 && echo "pause-467472" | sudo tee /etc/hostname
	I0429 19:54:19.149462   61304 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-467472
	
	I0429 19:54:19.149499   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHHostname
	I0429 19:54:19.152388   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:19.152839   61304 main.go:141] libmachine: (pause-467472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:19:55", ip: ""} in network mk-pause-467472: {Iface:virbr2 ExpiryTime:2024-04-29 20:53:19 +0000 UTC Type:0 Mac:52:54:00:12:19:55 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-467472 Clientid:01:52:54:00:12:19:55}
	I0429 19:54:19.152869   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined IP address 192.168.50.54 and MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:19.153011   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHPort
	I0429 19:54:19.153233   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:19.153417   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:19.153596   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHUsername
	I0429 19:54:19.153810   61304 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:19.154047   61304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0429 19:54:19.154086   61304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-467472' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-467472/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-467472' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:54:19.263896   61304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:54:19.263974   61304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:54:19.263996   61304 buildroot.go:174] setting up certificates
	I0429 19:54:19.264006   61304 provision.go:84] configureAuth start
	I0429 19:54:19.264018   61304 main.go:141] libmachine: (pause-467472) Calling .GetMachineName
	I0429 19:54:19.264331   61304 main.go:141] libmachine: (pause-467472) Calling .GetIP
	I0429 19:54:19.267247   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:19.267561   61304 main.go:141] libmachine: (pause-467472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:19:55", ip: ""} in network mk-pause-467472: {Iface:virbr2 ExpiryTime:2024-04-29 20:53:19 +0000 UTC Type:0 Mac:52:54:00:12:19:55 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-467472 Clientid:01:52:54:00:12:19:55}
	I0429 19:54:19.267592   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined IP address 192.168.50.54 and MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:19.267691   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHHostname
	I0429 19:54:19.270216   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:19.270669   61304 main.go:141] libmachine: (pause-467472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:19:55", ip: ""} in network mk-pause-467472: {Iface:virbr2 ExpiryTime:2024-04-29 20:53:19 +0000 UTC Type:0 Mac:52:54:00:12:19:55 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-467472 Clientid:01:52:54:00:12:19:55}
	I0429 19:54:19.270698   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined IP address 192.168.50.54 and MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:19.270846   61304 provision.go:143] copyHostCerts
	I0429 19:54:19.270904   61304 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:54:19.270915   61304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:54:19.270972   61304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:54:19.271092   61304 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:54:19.271104   61304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:54:19.271134   61304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:54:19.271192   61304 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:54:19.271200   61304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:54:19.271225   61304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:54:19.271273   61304 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.pause-467472 san=[127.0.0.1 192.168.50.54 localhost minikube pause-467472]
	I0429 19:54:19.648435   61304 provision.go:177] copyRemoteCerts
	I0429 19:54:19.648512   61304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:54:19.648535   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHHostname
	I0429 19:54:19.651443   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:19.651841   61304 main.go:141] libmachine: (pause-467472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:19:55", ip: ""} in network mk-pause-467472: {Iface:virbr2 ExpiryTime:2024-04-29 20:53:19 +0000 UTC Type:0 Mac:52:54:00:12:19:55 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-467472 Clientid:01:52:54:00:12:19:55}
	I0429 19:54:19.651875   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined IP address 192.168.50.54 and MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:19.652058   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHPort
	I0429 19:54:19.652329   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:19.652553   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHUsername
	I0429 19:54:19.652757   61304 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/pause-467472/id_rsa Username:docker}
	I0429 19:54:19.747258   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 19:54:19.782514   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:54:19.820296   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0429 19:54:19.855314   61304 provision.go:87] duration metric: took 591.294377ms to configureAuth
	I0429 19:54:19.855342   61304 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:54:19.855525   61304 config.go:182] Loaded profile config "pause-467472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:54:19.855606   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHHostname
	I0429 19:54:19.858701   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:19.859328   61304 main.go:141] libmachine: (pause-467472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:19:55", ip: ""} in network mk-pause-467472: {Iface:virbr2 ExpiryTime:2024-04-29 20:53:19 +0000 UTC Type:0 Mac:52:54:00:12:19:55 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-467472 Clientid:01:52:54:00:12:19:55}
	I0429 19:54:19.859358   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined IP address 192.168.50.54 and MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:19.859631   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHPort
	I0429 19:54:19.859829   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:19.859995   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:19.860156   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHUsername
	I0429 19:54:19.860332   61304 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:19.860493   61304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0429 19:54:19.860507   61304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:54:26.824718   61304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 19:54:26.824745   61304 machine.go:97] duration metric: took 7.946982325s to provisionDockerMachine
	I0429 19:54:26.824760   61304 start.go:293] postStartSetup for "pause-467472" (driver="kvm2")
	I0429 19:54:26.824773   61304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:54:26.824794   61304 main.go:141] libmachine: (pause-467472) Calling .DriverName
	I0429 19:54:26.825196   61304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:54:26.825247   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHHostname
	I0429 19:54:26.828137   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:26.828643   61304 main.go:141] libmachine: (pause-467472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:19:55", ip: ""} in network mk-pause-467472: {Iface:virbr2 ExpiryTime:2024-04-29 20:53:19 +0000 UTC Type:0 Mac:52:54:00:12:19:55 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-467472 Clientid:01:52:54:00:12:19:55}
	I0429 19:54:26.828678   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined IP address 192.168.50.54 and MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:26.828842   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHPort
	I0429 19:54:26.829043   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:26.829207   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHUsername
	I0429 19:54:26.829366   61304 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/pause-467472/id_rsa Username:docker}
	I0429 19:54:26.918895   61304 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:54:26.925340   61304 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:54:26.925367   61304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:54:26.925432   61304 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:54:26.925528   61304 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:54:26.925642   61304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:54:26.938180   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:54:26.970282   61304 start.go:296] duration metric: took 145.505445ms for postStartSetup
	I0429 19:54:26.970332   61304 fix.go:56] duration metric: took 8.119221635s for fixHost
	I0429 19:54:26.970381   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHHostname
	I0429 19:54:26.973317   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:26.973676   61304 main.go:141] libmachine: (pause-467472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:19:55", ip: ""} in network mk-pause-467472: {Iface:virbr2 ExpiryTime:2024-04-29 20:53:19 +0000 UTC Type:0 Mac:52:54:00:12:19:55 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-467472 Clientid:01:52:54:00:12:19:55}
	I0429 19:54:26.973718   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined IP address 192.168.50.54 and MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:26.973885   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHPort
	I0429 19:54:26.974121   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:26.974327   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:26.974481   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHUsername
	I0429 19:54:26.974651   61304 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:26.974796   61304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I0429 19:54:26.974808   61304 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 19:54:27.080415   61304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714420467.074513464
	
	I0429 19:54:27.080447   61304 fix.go:216] guest clock: 1714420467.074513464
	I0429 19:54:27.080457   61304 fix.go:229] Guest: 2024-04-29 19:54:27.074513464 +0000 UTC Remote: 2024-04-29 19:54:26.970337229 +0000 UTC m=+23.707271119 (delta=104.176235ms)
	I0429 19:54:27.080484   61304 fix.go:200] guest clock delta is within tolerance: 104.176235ms
	I0429 19:54:27.080490   61304 start.go:83] releasing machines lock for "pause-467472", held for 8.229408153s
	I0429 19:54:27.080516   61304 main.go:141] libmachine: (pause-467472) Calling .DriverName
	I0429 19:54:27.080824   61304 main.go:141] libmachine: (pause-467472) Calling .GetIP
	I0429 19:54:27.083922   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:27.084403   61304 main.go:141] libmachine: (pause-467472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:19:55", ip: ""} in network mk-pause-467472: {Iface:virbr2 ExpiryTime:2024-04-29 20:53:19 +0000 UTC Type:0 Mac:52:54:00:12:19:55 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-467472 Clientid:01:52:54:00:12:19:55}
	I0429 19:54:27.084433   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined IP address 192.168.50.54 and MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:27.084590   61304 main.go:141] libmachine: (pause-467472) Calling .DriverName
	I0429 19:54:27.085341   61304 main.go:141] libmachine: (pause-467472) Calling .DriverName
	I0429 19:54:27.085565   61304 main.go:141] libmachine: (pause-467472) Calling .DriverName
	I0429 19:54:27.085715   61304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:54:27.085767   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHHostname
	I0429 19:54:27.085803   61304 ssh_runner.go:195] Run: cat /version.json
	I0429 19:54:27.085829   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHHostname
	I0429 19:54:27.088759   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:27.088994   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:27.089162   61304 main.go:141] libmachine: (pause-467472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:19:55", ip: ""} in network mk-pause-467472: {Iface:virbr2 ExpiryTime:2024-04-29 20:53:19 +0000 UTC Type:0 Mac:52:54:00:12:19:55 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-467472 Clientid:01:52:54:00:12:19:55}
	I0429 19:54:27.089197   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined IP address 192.168.50.54 and MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:27.089337   61304 main.go:141] libmachine: (pause-467472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:19:55", ip: ""} in network mk-pause-467472: {Iface:virbr2 ExpiryTime:2024-04-29 20:53:19 +0000 UTC Type:0 Mac:52:54:00:12:19:55 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-467472 Clientid:01:52:54:00:12:19:55}
	I0429 19:54:27.089344   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHPort
	I0429 19:54:27.089355   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined IP address 192.168.50.54 and MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:27.089555   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:27.089573   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHPort
	I0429 19:54:27.089732   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHUsername
	I0429 19:54:27.089752   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHKeyPath
	I0429 19:54:27.089998   61304 main.go:141] libmachine: (pause-467472) Calling .GetSSHUsername
	I0429 19:54:27.090058   61304 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/pause-467472/id_rsa Username:docker}
	I0429 19:54:27.090369   61304 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/pause-467472/id_rsa Username:docker}
	I0429 19:54:27.172976   61304 ssh_runner.go:195] Run: systemctl --version
	I0429 19:54:27.201317   61304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:54:27.391243   61304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:54:27.410050   61304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:54:27.410157   61304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:54:27.424848   61304 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0429 19:54:27.424879   61304 start.go:494] detecting cgroup driver to use...
	I0429 19:54:27.424959   61304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:54:27.448930   61304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:54:27.468718   61304 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:54:27.468794   61304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:54:27.500758   61304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:54:27.533116   61304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:54:27.857153   61304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:54:28.374917   61304 docker.go:233] disabling docker service ...
	I0429 19:54:28.374998   61304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:54:28.550024   61304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:54:28.629865   61304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:54:28.935987   61304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:54:29.172910   61304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 19:54:29.188978   61304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:54:29.218499   61304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 19:54:29.218574   61304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:29.239302   61304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:54:29.239378   61304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:29.256598   61304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:29.288869   61304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:29.315789   61304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:54:29.354647   61304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:29.389376   61304 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:29.442631   61304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:29.466514   61304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:54:29.488162   61304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:54:29.507261   61304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:54:29.850235   61304 ssh_runner.go:195] Run: sudo systemctl restart crio
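The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) before systemd reloads and restarts CRI-O. A quick, hypothetical verification of the values those edits should leave behind; only the key/value pairs in the comments are taken from the sed expressions in this log, the grep itself is not part of the minikube flow:

    # Hypothetical verification of the drop-in rewritten by the sed commands above.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected values, based on the sed expressions in this log:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",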
	I0429 19:54:30.499837   61304 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:54:30.499949   61304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:54:30.509645   61304 start.go:562] Will wait 60s for crictl version
	I0429 19:54:30.509730   61304 ssh_runner.go:195] Run: which crictl
	I0429 19:54:30.515676   61304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:54:30.578944   61304 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 19:54:30.579052   61304 ssh_runner.go:195] Run: crio --version
	I0429 19:54:30.622783   61304 ssh_runner.go:195] Run: crio --version
	I0429 19:54:30.866610   61304 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 19:54:30.867989   61304 main.go:141] libmachine: (pause-467472) Calling .GetIP
	I0429 19:54:30.871727   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:30.872144   61304 main.go:141] libmachine: (pause-467472) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:19:55", ip: ""} in network mk-pause-467472: {Iface:virbr2 ExpiryTime:2024-04-29 20:53:19 +0000 UTC Type:0 Mac:52:54:00:12:19:55 Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:pause-467472 Clientid:01:52:54:00:12:19:55}
	I0429 19:54:30.872211   61304 main.go:141] libmachine: (pause-467472) DBG | domain pause-467472 has defined IP address 192.168.50.54 and MAC address 52:54:00:12:19:55 in network mk-pause-467472
	I0429 19:54:30.872459   61304 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0429 19:54:30.981927   61304 kubeadm.go:877] updating cluster {Name:pause-467472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-467472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 19:54:30.982178   61304 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:54:30.982265   61304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:54:31.216829   61304 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 19:54:31.216855   61304 crio.go:433] Images already preloaded, skipping extraction
	I0429 19:54:31.216917   61304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:54:31.324200   61304 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 19:54:31.324228   61304 cache_images.go:84] Images are preloaded, skipping loading
	I0429 19:54:31.324239   61304 kubeadm.go:928] updating node { 192.168.50.54 8443 v1.30.0 crio true true} ...
	I0429 19:54:31.324403   61304 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-467472 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-467472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
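The unit fragment above becomes the kubelet systemd drop-in copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A hedged sketch of how the flags actually applied to the running kubelet could be inspected on the guest; these inspection commands are an illustration, not something this test runs:

    # Illustrative inspection of the kubelet drop-in written below (the
    # 10-kubeadm.conf path comes from this log); shows the ExecStart line
    # carrying --hostname-override and --node-ip.
    sudo systemctl cat kubelet --no-pager
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf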
	I0429 19:54:31.324499   61304 ssh_runner.go:195] Run: crio config
	I0429 19:54:31.387289   61304 cni.go:84] Creating CNI manager for ""
	I0429 19:54:31.387317   61304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:54:31.387329   61304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 19:54:31.387360   61304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.54 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-467472 NodeName:pause-467472 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 19:54:31.387584   61304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-467472"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 19:54:31.387669   61304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:54:31.407669   61304 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 19:54:31.407760   61304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 19:54:31.428351   61304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0429 19:54:31.462527   61304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:54:31.501628   61304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
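The kubeadm configuration rendered earlier is copied to /var/tmp/minikube/kubeadm.yaml.new here. If one wanted to sanity-check such a file by hand, recent kubeadm releases ship a `config validate` subcommand; availability in the bundled v1.30.0 binary is an assumption, and the command below is not something the log shows minikube running:

    # Hypothetical manual check of the generated config (not part of the minikube flow).
    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new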
	I0429 19:54:31.532890   61304 ssh_runner.go:195] Run: grep 192.168.50.54	control-plane.minikube.internal$ /etc/hosts
	I0429 19:54:31.537859   61304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:54:31.759260   61304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:54:31.788930   61304 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/pause-467472 for IP: 192.168.50.54
	I0429 19:54:31.788956   61304 certs.go:194] generating shared ca certs ...
	I0429 19:54:31.788972   61304 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:54:31.789166   61304 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 19:54:31.789236   61304 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 19:54:31.789251   61304 certs.go:256] generating profile certs ...
	I0429 19:54:31.789370   61304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/pause-467472/client.key
	I0429 19:54:31.789450   61304 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/pause-467472/apiserver.key.84823e0b
	I0429 19:54:31.789500   61304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/pause-467472/proxy-client.key
	I0429 19:54:31.789632   61304 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 19:54:31.789675   61304 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 19:54:31.789689   61304 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 19:54:31.789723   61304 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 19:54:31.789757   61304 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 19:54:31.789788   61304 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 19:54:31.789854   61304 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:54:31.790771   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:54:31.840497   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 19:54:31.870860   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:54:31.900865   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:54:31.930218   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/pause-467472/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0429 19:54:31.959431   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/pause-467472/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 19:54:31.991796   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/pause-467472/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:54:32.022941   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/pause-467472/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 19:54:32.054246   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 19:54:32.089623   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:54:32.121861   61304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 19:54:32.153362   61304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 19:54:32.173609   61304 ssh_runner.go:195] Run: openssl version
	I0429 19:54:32.180770   61304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 19:54:32.194952   61304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 19:54:32.200666   61304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:54:32.200730   61304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 19:54:32.208180   61304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:54:32.219460   61304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:54:32.232415   61304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:54:32.239080   61304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:54:32.239169   61304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:54:32.246838   61304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:54:32.258131   61304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 19:54:32.273518   61304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 19:54:32.279262   61304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:54:32.279329   61304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 19:54:32.286828   61304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
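The three blocks above install each CA into the guest trust store twice: under its own name in /usr/share/ca-certificates and as a subject-hash symlink in /etc/ssl/certs (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the test certs), which is the lookup convention OpenSSL uses. A small sketch of how the hash maps to the link name; the commands are illustrative, only the paths and hash values come from this log:

    # Illustrative: reproduce the subject-hash link name used above for minikubeCA.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$HASH"                      # b5213941 in this log
    ls -l "/etc/ssl/certs/${HASH}.0"  # symlink created by the ln -fs above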
	I0429 19:54:32.298929   61304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:54:32.306450   61304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 19:54:32.315400   61304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 19:54:32.324209   61304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 19:54:32.333183   61304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 19:54:32.342269   61304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 19:54:32.349895   61304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
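Each `openssl x509 ... -checkend 86400` call above exits non-zero if the certificate expires within the next 24 hours, which is how the existing control-plane certificates are judged reusable before StartCluster below. A hedged, interactive version of the same check; the path is one of the certs named in this log, and the if/else wrapper is purely illustrative:

    # Illustrative version of the expiry check minikube performs above.
    CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    openssl x509 -noout -enddate -in "$CERT"          # print the notAfter date
    if openssl x509 -noout -in "$CERT" -checkend 86400; then
        echo "cert valid for at least another 24h"
    else
        echo "cert expires within 24h (or is already expired)"
    fi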
	I0429 19:54:32.356604   61304 kubeadm.go:391] StartCluster: {Name:pause-467472 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-467472 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:54:32.356703   61304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 19:54:32.356776   61304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 19:54:32.399669   61304 cri.go:89] found id: "a43a394dbf9934e3dc9ed65529f9a97129f035af300089973245a90d2e2e8474"
	I0429 19:54:32.399696   61304 cri.go:89] found id: "80ed718bb499c55acb1feb339adcd1401d1da0ca245633dae77fd5c49ec6ef03"
	I0429 19:54:32.399700   61304 cri.go:89] found id: "cb26043d1744b0e27644bdd5a8f34835683bedc9dcc08a1e1c1c2b07cda89127"
	I0429 19:54:32.399703   61304 cri.go:89] found id: "5fa5f157e8331a96ddcfb01245b8bcd3e83b3e0c1a86f692339d9b6caba3858f"
	I0429 19:54:32.399706   61304 cri.go:89] found id: "0a877329ca5eb47d2eadf1c18f3f2091dea760b6e9d962d14e7c882f854bb878"
	I0429 19:54:32.399710   61304 cri.go:89] found id: "13560af33a9dce7ccf8d5edc13a5ac3b8192c21a14a1d74c86e409357f505e98"
	I0429 19:54:32.399714   61304 cri.go:89] found id: ""
	I0429 19:54:32.399774   61304 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-467472 -n pause-467472
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-467472 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-467472 logs -n 25: (4.233428204s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo docker                         | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo find                           | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo crio                           | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-870155                                     | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC | 29 Apr 24 19:54 UTC |
	| start   | -p pause-467472                                      | pause-467472              | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC | 29 Apr 24 19:54 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-407092                            | running-upgrade-407092    | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC | 29 Apr 24 19:54 UTC |
	| start   | -p cert-expiration-509508                            | cert-expiration-509508    | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-090341                         | force-systemd-flag-090341 | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-935578                         | kubernetes-upgrade-935578 | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-935578                         | kubernetes-upgrade-935578 | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 19:54:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 19:54:36.637836   61801 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:54:36.637995   61801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:54:36.638007   61801 out.go:304] Setting ErrFile to fd 2...
	I0429 19:54:36.638023   61801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:54:36.638322   61801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:54:36.639012   61801 out.go:298] Setting JSON to false
	I0429 19:54:36.640292   61801 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5775,"bootTime":1714414702,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 19:54:36.640374   61801 start.go:139] virtualization: kvm guest
	I0429 19:54:36.642541   61801 out.go:177] * [kubernetes-upgrade-935578] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 19:54:36.644283   61801 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:54:36.644334   61801 notify.go:220] Checking for updates...
	I0429 19:54:36.645679   61801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:54:36.646956   61801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:54:36.648216   61801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:54:36.649484   61801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 19:54:36.650701   61801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:54:36.652416   61801 config.go:182] Loaded profile config "kubernetes-upgrade-935578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:54:36.652976   61801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:54:36.653037   61801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:54:36.668570   61801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46873
	I0429 19:54:36.669018   61801 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:54:36.669690   61801 main.go:141] libmachine: Using API Version  1
	I0429 19:54:36.669752   61801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:54:36.670111   61801 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:54:36.670310   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:54:36.670570   61801 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:54:36.670885   61801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:54:36.670924   61801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:54:36.686757   61801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0429 19:54:36.687200   61801 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:54:36.687706   61801 main.go:141] libmachine: Using API Version  1
	I0429 19:54:36.687739   61801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:54:36.688118   61801 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:54:36.688420   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:54:36.731472   61801 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 19:54:36.732931   61801 start.go:297] selected driver: kvm2
	I0429 19:54:36.732953   61801 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-935578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-935578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:54:36.733112   61801 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:54:36.734166   61801 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:54:36.734266   61801 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 19:54:36.752334   61801 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 19:54:36.752714   61801 cni.go:84] Creating CNI manager for ""
	I0429 19:54:36.752733   61801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:54:36.752790   61801 start.go:340] cluster config:
	{Name:kubernetes-upgrade-935578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-935578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:54:36.752906   61801 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:54:36.755612   61801 out.go:177] * Starting "kubernetes-upgrade-935578" primary control-plane node in "kubernetes-upgrade-935578" cluster
	I0429 19:54:34.118158   61304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.242227495s)
	I0429 19:54:34.118195   61304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:54:34.364403   61304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:54:34.446089   61304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:54:34.565464   61304 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:54:34.565554   61304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:54:35.066291   61304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:54:35.565747   61304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:54:35.584006   61304 api_server.go:72] duration metric: took 1.018541203s to wait for apiserver process to appear ...
	I0429 19:54:35.584037   61304 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:54:35.584059   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:35.733654   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:35.734418   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | unable to find current IP address of domain cert-expiration-509508 in network mk-cert-expiration-509508
	I0429 19:54:35.734435   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | I0429 19:54:35.734232   61621 retry.go:31] will retry after 2.119212845s: waiting for machine to come up
	I0429 19:54:37.855993   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:37.856491   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | unable to find current IP address of domain cert-expiration-509508 in network mk-cert-expiration-509508
	I0429 19:54:37.856513   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | I0429 19:54:37.856452   61621 retry.go:31] will retry after 2.524229713s: waiting for machine to come up
	I0429 19:54:38.841293   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 19:54:38.841328   61304 api_server.go:103] status: https://192.168.50.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 19:54:38.841343   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:38.875266   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 19:54:38.875302   61304 api_server.go:103] status: https://192.168.50.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 19:54:39.084855   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:39.090284   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:54:39.090320   61304 api_server.go:103] status: https://192.168.50.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:54:39.584948   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:39.596854   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:54:39.596885   61304 api_server.go:103] status: https://192.168.50.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:54:40.085087   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:40.094581   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:54:40.094613   61304 api_server.go:103] status: https://192.168.50.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:54:40.584231   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:40.588861   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 200:
	ok
	I0429 19:54:40.596661   61304 api_server.go:141] control plane version: v1.30.0
	I0429 19:54:40.596695   61304 api_server.go:131] duration metric: took 5.012650451s to wait for apiserver health ...
	I0429 19:54:40.596707   61304 cni.go:84] Creating CNI manager for ""
	I0429 19:54:40.596715   61304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:54:40.598645   61304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 19:54:36.756824   61801 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:54:36.756873   61801 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 19:54:36.756889   61801 cache.go:56] Caching tarball of preloaded images
	I0429 19:54:36.757002   61801 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 19:54:36.757018   61801 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 19:54:36.757139   61801 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/config.json ...
	I0429 19:54:36.757409   61801 start.go:360] acquireMachinesLock for kubernetes-upgrade-935578: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:54:40.600192   61304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 19:54:40.615497   61304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 19:54:40.638015   61304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:54:40.650678   61304 system_pods.go:59] 6 kube-system pods found
	I0429 19:54:40.650733   61304 system_pods.go:61] "coredns-7db6d8ff4d-lxtq2" [db9d4855-6b30-41d8-b97d-2e8bab9e7135] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 19:54:40.650747   61304 system_pods.go:61] "etcd-pause-467472" [c4fcb8eb-d378-4229-b3d2-bf0d8da6d4a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 19:54:40.650759   61304 system_pods.go:61] "kube-apiserver-pause-467472" [52345820-7c48-453b-8b9a-1c837d664ea7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 19:54:40.650771   61304 system_pods.go:61] "kube-controller-manager-pause-467472" [7cabe046-247c-4cda-83e2-3a34ebf9db66] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 19:54:40.650781   61304 system_pods.go:61] "kube-proxy-2brrw" [dc85d0aa-db2c-4c9a-a318-19fd8634c217] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0429 19:54:40.650788   61304 system_pods.go:61] "kube-scheduler-pause-467472" [0455fda0-9152-4212-97f5-764a57a328dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 19:54:40.650808   61304 system_pods.go:74] duration metric: took 12.76494ms to wait for pod list to return data ...
	I0429 19:54:40.650832   61304 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:54:40.654908   61304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:54:40.654948   61304 node_conditions.go:123] node cpu capacity is 2
	I0429 19:54:40.654963   61304 node_conditions.go:105] duration metric: took 4.119961ms to run NodePressure ...
	I0429 19:54:40.654986   61304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:54:40.952883   61304 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 19:54:40.959638   61304 kubeadm.go:733] kubelet initialised
	I0429 19:54:40.959670   61304 kubeadm.go:734] duration metric: took 6.753272ms waiting for restarted kubelet to initialise ...
	I0429 19:54:40.959679   61304 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:54:40.968789   61304 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:42.975963   61304 pod_ready.go:102] pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace has status "Ready":"False"
	I0429 19:54:40.384100   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:40.384625   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | unable to find current IP address of domain cert-expiration-509508 in network mk-cert-expiration-509508
	I0429 19:54:40.384648   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | I0429 19:54:40.384586   61621 retry.go:31] will retry after 2.83087137s: waiting for machine to come up
	I0429 19:54:43.216864   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:43.217395   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | unable to find current IP address of domain cert-expiration-509508 in network mk-cert-expiration-509508
	I0429 19:54:43.217410   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | I0429 19:54:43.217319   61621 retry.go:31] will retry after 2.889221716s: waiting for machine to come up
	I0429 19:54:45.477042   61304 pod_ready.go:102] pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace has status "Ready":"False"
	I0429 19:54:47.976991   61304 pod_ready.go:102] pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace has status "Ready":"False"
	I0429 19:54:46.110459   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:46.110948   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | unable to find current IP address of domain cert-expiration-509508 in network mk-cert-expiration-509508
	I0429 19:54:46.110970   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | I0429 19:54:46.110900   61621 retry.go:31] will retry after 5.231259953s: waiting for machine to come up
	I0429 19:54:52.939537   61545 start.go:364] duration metric: took 33.088858515s to acquireMachinesLock for "force-systemd-flag-090341"
	I0429 19:54:52.939599   61545 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-090341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-090341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:54:52.941175   61545 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 19:54:48.976580   61304 pod_ready.go:92] pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:48.976605   61304 pod_ready.go:81] duration metric: took 8.007783761s for pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:48.976614   61304 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:50.983258   61304 pod_ready.go:102] pod "etcd-pause-467472" in "kube-system" namespace has status "Ready":"False"
	I0429 19:54:52.987271   61304 pod_ready.go:102] pod "etcd-pause-467472" in "kube-system" namespace has status "Ready":"False"
	I0429 19:54:51.343596   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.344112   61469 main.go:141] libmachine: (cert-expiration-509508) Found IP for machine: 192.168.61.227
	I0429 19:54:51.344125   61469 main.go:141] libmachine: (cert-expiration-509508) Reserving static IP address...
	I0429 19:54:51.344133   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has current primary IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.344503   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | unable to find host DHCP lease matching {name: "cert-expiration-509508", mac: "52:54:00:a6:1a:b3", ip: "192.168.61.227"} in network mk-cert-expiration-509508
	I0429 19:54:51.417592   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | Getting to WaitForSSH function...
	I0429 19:54:51.417619   61469 main.go:141] libmachine: (cert-expiration-509508) Reserved static IP address: 192.168.61.227
	I0429 19:54:51.417633   61469 main.go:141] libmachine: (cert-expiration-509508) Waiting for SSH to be available...
	I0429 19:54:51.420434   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.420912   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:51.420970   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.421090   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | Using SSH client type: external
	I0429 19:54:51.421106   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/cert-expiration-509508/id_rsa (-rw-------)
	I0429 19:54:51.421135   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/cert-expiration-509508/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:54:51.421143   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | About to run SSH command:
	I0429 19:54:51.421153   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | exit 0
	I0429 19:54:51.547466   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | SSH cmd err, output: <nil>: 
	I0429 19:54:51.547730   61469 main.go:141] libmachine: (cert-expiration-509508) KVM machine creation complete!
	I0429 19:54:51.548002   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetConfigRaw
	I0429 19:54:51.548683   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:51.548888   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:51.549111   61469 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 19:54:51.549121   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetState
	I0429 19:54:51.550594   61469 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 19:54:51.550604   61469 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 19:54:51.550610   61469 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 19:54:51.550618   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:51.553389   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.553713   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:51.553740   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.553865   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:51.554042   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.554210   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.554366   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:51.554532   61469 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:51.554728   61469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0429 19:54:51.554733   61469 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 19:54:51.658578   61469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:54:51.658589   61469 main.go:141] libmachine: Detecting the provisioner...
	I0429 19:54:51.658595   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:51.661685   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.662148   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:51.662168   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.662397   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:51.662625   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.662806   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.662991   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:51.663147   61469 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:51.663353   61469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0429 19:54:51.663362   61469 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 19:54:51.767557   61469 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 19:54:51.767648   61469 main.go:141] libmachine: found compatible host: buildroot
	I0429 19:54:51.767656   61469 main.go:141] libmachine: Provisioning with buildroot...
	I0429 19:54:51.767667   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetMachineName
	I0429 19:54:51.767932   61469 buildroot.go:166] provisioning hostname "cert-expiration-509508"
	I0429 19:54:51.767949   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetMachineName
	I0429 19:54:51.768153   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:51.770922   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.771314   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:51.771335   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.771466   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:51.771642   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.771792   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.771917   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:51.772050   61469 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:51.772221   61469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0429 19:54:51.772227   61469 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-509508 && echo "cert-expiration-509508" | sudo tee /etc/hostname
	I0429 19:54:51.891046   61469 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-509508
	
	I0429 19:54:51.891064   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:51.893645   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.894045   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:51.894099   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.894276   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:51.894484   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.894603   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.894751   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:51.894891   61469 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:51.895055   61469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0429 19:54:51.895068   61469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-509508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-509508/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-509508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:54:52.008825   61469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:54:52.008841   61469 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:54:52.008868   61469 buildroot.go:174] setting up certificates
	I0429 19:54:52.008881   61469 provision.go:84] configureAuth start
	I0429 19:54:52.008892   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetMachineName
	I0429 19:54:52.009180   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetIP
	I0429 19:54:52.011847   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.012204   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.012220   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.012370   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.014837   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.015147   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.015163   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.015304   61469 provision.go:143] copyHostCerts
	I0429 19:54:52.015366   61469 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:54:52.015373   61469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:54:52.015440   61469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:54:52.015607   61469 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:54:52.015613   61469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:54:52.015645   61469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:54:52.015735   61469 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:54:52.015740   61469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:54:52.015763   61469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:54:52.015841   61469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-509508 san=[127.0.0.1 192.168.61.227 cert-expiration-509508 localhost minikube]
	I0429 19:54:52.214998   61469 provision.go:177] copyRemoteCerts
	I0429 19:54:52.215051   61469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:54:52.215071   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.217776   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.218120   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.218147   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.218319   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:52.218487   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.218626   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:52.218754   61469 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/cert-expiration-509508/id_rsa Username:docker}
	I0429 19:54:52.303472   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:54:52.331964   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0429 19:54:52.359334   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 19:54:52.387088   61469 provision.go:87] duration metric: took 378.197066ms to configureAuth
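The provisioning step above generates a server certificate with SANs [127.0.0.1 192.168.61.227 cert-expiration-509508 localhost minikube] and copies it to /etc/docker on the guest. A minimal sketch of how those SANs could be inspected on the host, assuming openssl is available (this command is not part of the test run):

    # Illustrative only: print the SAN entries of the server certificate generated above
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'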
	I0429 19:54:52.387106   61469 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:54:52.387277   61469 config.go:182] Loaded profile config "cert-expiration-509508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:54:52.387355   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.390131   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.390515   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.390538   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.390716   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:52.390903   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.391056   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.391164   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:52.391330   61469 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:52.391484   61469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0429 19:54:52.391493   61469 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:54:52.692586   61469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 19:54:52.692600   61469 main.go:141] libmachine: Checking connection to Docker...
	I0429 19:54:52.692610   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetURL
	I0429 19:54:52.694022   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | Using libvirt version 6000000
	I0429 19:54:52.696314   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.696594   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.696618   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.696849   61469 main.go:141] libmachine: Docker is up and running!
	I0429 19:54:52.696857   61469 main.go:141] libmachine: Reticulating splines...
	I0429 19:54:52.696862   61469 client.go:171] duration metric: took 25.591340389s to LocalClient.Create
	I0429 19:54:52.696884   61469 start.go:167] duration metric: took 25.591405786s to libmachine.API.Create "cert-expiration-509508"
	I0429 19:54:52.696891   61469 start.go:293] postStartSetup for "cert-expiration-509508" (driver="kvm2")
	I0429 19:54:52.696904   61469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:54:52.696922   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:52.697162   61469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:54:52.697179   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.700011   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.700388   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.700411   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.700542   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:52.700728   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.700871   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:52.701026   61469 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/cert-expiration-509508/id_rsa Username:docker}
	I0429 19:54:52.782704   61469 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:54:52.787995   61469 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:54:52.788012   61469 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:54:52.788091   61469 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:54:52.788190   61469 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:54:52.788311   61469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:54:52.800390   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:54:52.830933   61469 start.go:296] duration metric: took 134.029108ms for postStartSetup
	I0429 19:54:52.830978   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetConfigRaw
	I0429 19:54:52.831724   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetIP
	I0429 19:54:52.834527   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.834913   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.834930   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.835213   61469 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/config.json ...
	I0429 19:54:52.835402   61469 start.go:128] duration metric: took 25.754630638s to createHost
	I0429 19:54:52.835421   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.837896   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.838281   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.838300   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.838431   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:52.838600   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.838770   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.838941   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:52.839169   61469 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:52.839328   61469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0429 19:54:52.839335   61469 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 19:54:52.939414   61469 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714420492.919345164
	
	I0429 19:54:52.939426   61469 fix.go:216] guest clock: 1714420492.919345164
	I0429 19:54:52.939434   61469 fix.go:229] Guest: 2024-04-29 19:54:52.919345164 +0000 UTC Remote: 2024-04-29 19:54:52.835408361 +0000 UTC m=+43.382949359 (delta=83.936803ms)
	I0429 19:54:52.939457   61469 fix.go:200] guest clock delta is within tolerance: 83.936803ms
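The delta reported above is simply the guest timestamp minus the host-side timestamp: 1714420492.919345164 s - 1714420492.835408361 s = 0.083936803 s, i.e. the 83.936803 ms shown, which is inside the driver's clock-skew tolerance.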
	I0429 19:54:52.939463   61469 start.go:83] releasing machines lock for "cert-expiration-509508", held for 25.858854881s
	I0429 19:54:52.939489   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:52.939784   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetIP
	I0429 19:54:52.943449   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.943823   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.943844   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.944021   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:52.944532   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:52.944704   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:52.944799   61469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:54:52.944832   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.944905   61469 ssh_runner.go:195] Run: cat /version.json
	I0429 19:54:52.944917   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.948388   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.948675   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.948884   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.948897   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.949085   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:52.949109   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.949128   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.949240   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.949348   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:52.949413   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:52.949528   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.949601   61469 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/cert-expiration-509508/id_rsa Username:docker}
	I0429 19:54:52.949887   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:52.950035   61469 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/cert-expiration-509508/id_rsa Username:docker}
	I0429 19:54:53.059030   61469 ssh_runner.go:195] Run: systemctl --version
	I0429 19:54:53.067582   61469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:54:53.245086   61469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:54:53.253409   61469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:54:53.253484   61469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:54:53.272160   61469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:54:53.272177   61469 start.go:494] detecting cgroup driver to use...
	I0429 19:54:53.272259   61469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:54:53.292874   61469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:54:53.309571   61469 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:54:53.309631   61469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:54:53.329917   61469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:54:53.350168   61469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:54:53.491686   61469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:54:53.652022   61469 docker.go:233] disabling docker service ...
	I0429 19:54:53.652087   61469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:54:53.670214   61469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:54:53.686954   61469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:54:53.843358   61469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:54:53.997569   61469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 19:54:54.016046   61469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:54:54.042400   61469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 19:54:54.042453   61469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:54.054457   61469 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:54:54.054515   61469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:54.067690   61469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:54.079318   61469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:54.091529   61469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:54:54.103318   61469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:54.115430   61469 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:54.135219   61469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:54.147282   61469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:54:54.157795   61469 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 19:54:54.157860   61469 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 19:54:54.173866   61469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:54:54.187698   61469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:54:54.346354   61469 ssh_runner.go:195] Run: sudo systemctl restart crio
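Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys before crio is restarted. A quick way to confirm on the guest, shown only as an illustrative sketch since the exact file contents are not captured in this log:

    # Illustrative check, not executed by the test:
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected (approximately):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",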
	I0429 19:54:54.493804   61304 pod_ready.go:92] pod "etcd-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:54.493838   61304 pod_ready.go:81] duration metric: took 5.517216695s for pod "etcd-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.493852   61304 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.500971   61304 pod_ready.go:92] pod "kube-apiserver-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:54.500999   61304 pod_ready.go:81] duration metric: took 7.138665ms for pod "kube-apiserver-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.501012   61304 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.506242   61304 pod_ready.go:92] pod "kube-controller-manager-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:54.506268   61304 pod_ready.go:81] duration metric: took 5.247358ms for pod "kube-controller-manager-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.506280   61304 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2brrw" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.512205   61304 pod_ready.go:92] pod "kube-proxy-2brrw" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:54.512224   61304 pod_ready.go:81] duration metric: took 5.935782ms for pod "kube-proxy-2brrw" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.512234   61304 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.518188   61304 pod_ready.go:92] pod "kube-scheduler-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:54.518212   61304 pod_ready.go:81] duration metric: took 5.97113ms for pod "kube-scheduler-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.518221   61304 pod_ready.go:38] duration metric: took 13.558530768s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:54:54.518241   61304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 19:54:54.533450   61304 ops.go:34] apiserver oom_adj: -16
	I0429 19:54:54.533469   61304 kubeadm.go:591] duration metric: took 22.073541311s to restartPrimaryControlPlane
	I0429 19:54:54.533479   61304 kubeadm.go:393] duration metric: took 22.176881709s to StartCluster
	I0429 19:54:54.533496   61304 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:54:54.533573   61304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:54:54.534577   61304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:54:54.534848   61304 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:54:54.537639   61304 out.go:177] * Verifying Kubernetes components...
	I0429 19:54:54.534980   61304 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 19:54:54.535174   61304 config.go:182] Loaded profile config "pause-467472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:54:54.539254   61304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:54:54.540605   61304 out.go:177] * Enabled addons: 
	I0429 19:54:54.517908   61469 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:54:54.517983   61469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:54:54.524804   61469 start.go:562] Will wait 60s for crictl version
	I0429 19:54:54.524872   61469 ssh_runner.go:195] Run: which crictl
	I0429 19:54:54.531801   61469 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:54:54.586173   61469 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 19:54:54.586253   61469 ssh_runner.go:195] Run: crio --version
	I0429 19:54:54.632689   61469 ssh_runner.go:195] Run: crio --version
	I0429 19:54:54.674233   61469 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
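crictl was pointed at the CRI-O socket earlier via /etc/crictl.yaml (runtime-endpoint: unix:///var/run/crio/crio.sock), which is what makes the version probe above succeed. An equivalent manual query, shown only as an illustrative sketch and not part of the run:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info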
	I0429 19:54:52.943005   61545 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0429 19:54:52.943201   61545 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:54:52.943236   61545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:54:52.960643   61545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45587
	I0429 19:54:52.961063   61545 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:54:52.961595   61545 main.go:141] libmachine: Using API Version  1
	I0429 19:54:52.961615   61545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:54:52.962013   61545 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:54:52.962272   61545 main.go:141] libmachine: (force-systemd-flag-090341) Calling .GetMachineName
	I0429 19:54:52.962453   61545 main.go:141] libmachine: (force-systemd-flag-090341) Calling .DriverName
	I0429 19:54:52.962614   61545 start.go:159] libmachine.API.Create for "force-systemd-flag-090341" (driver="kvm2")
	I0429 19:54:52.962641   61545 client.go:168] LocalClient.Create starting
	I0429 19:54:52.962676   61545 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 19:54:52.962716   61545 main.go:141] libmachine: Decoding PEM data...
	I0429 19:54:52.962737   61545 main.go:141] libmachine: Parsing certificate...
	I0429 19:54:52.962815   61545 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 19:54:52.962849   61545 main.go:141] libmachine: Decoding PEM data...
	I0429 19:54:52.962867   61545 main.go:141] libmachine: Parsing certificate...
	I0429 19:54:52.962893   61545 main.go:141] libmachine: Running pre-create checks...
	I0429 19:54:52.962906   61545 main.go:141] libmachine: (force-systemd-flag-090341) Calling .PreCreateCheck
	I0429 19:54:52.963306   61545 main.go:141] libmachine: (force-systemd-flag-090341) Calling .GetConfigRaw
	I0429 19:54:52.963740   61545 main.go:141] libmachine: Creating machine...
	I0429 19:54:52.963756   61545 main.go:141] libmachine: (force-systemd-flag-090341) Calling .Create
	I0429 19:54:52.963890   61545 main.go:141] libmachine: (force-systemd-flag-090341) Creating KVM machine...
	I0429 19:54:52.965039   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | found existing default KVM network
	I0429 19:54:52.966216   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:52.966008   61938 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:31:bc:7c} reservation:<nil>}
	I0429 19:54:52.967036   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:52.966954   61938 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e5:37:4d} reservation:<nil>}
	I0429 19:54:52.968134   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:52.968045   61938 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:99:a1:58} reservation:<nil>}
	I0429 19:54:52.969494   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:52.969417   61938 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002899b0}
	I0429 19:54:52.969540   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | created network xml: 
	I0429 19:54:52.969564   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | <network>
	I0429 19:54:52.969578   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |   <name>mk-force-systemd-flag-090341</name>
	I0429 19:54:52.969597   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |   <dns enable='no'/>
	I0429 19:54:52.969620   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |   
	I0429 19:54:52.969642   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0429 19:54:52.969652   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |     <dhcp>
	I0429 19:54:52.969663   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0429 19:54:52.969669   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |     </dhcp>
	I0429 19:54:52.969674   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |   </ip>
	I0429 19:54:52.969680   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |   
	I0429 19:54:52.969685   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | </network>
	I0429 19:54:52.969692   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | 
	I0429 19:54:52.975119   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | trying to create private KVM network mk-force-systemd-flag-090341 192.168.72.0/24...
	I0429 19:54:53.049705   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | private KVM network mk-force-systemd-flag-090341 192.168.72.0/24 created
	I0429 19:54:53.049742   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341 ...
	I0429 19:54:53.049759   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:53.049644   61938 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:54:53.049809   61545 main.go:141] libmachine: (force-systemd-flag-090341) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 19:54:53.049847   61545 main.go:141] libmachine: (force-systemd-flag-090341) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 19:54:53.280389   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:53.280214   61938 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341/id_rsa...
	I0429 19:54:53.369397   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:53.369239   61938 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341/force-systemd-flag-090341.rawdisk...
	I0429 19:54:53.369435   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Writing magic tar header
	I0429 19:54:53.369455   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Writing SSH key tar header
	I0429 19:54:53.369474   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:53.369360   61938 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341 ...
	I0429 19:54:53.369491   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341 (perms=drwx------)
	I0429 19:54:53.369512   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341
	I0429 19:54:53.369550   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 19:54:53.369575   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 19:54:53.369590   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:54:53.369604   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 19:54:53.369618   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 19:54:53.369635   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home/jenkins
	I0429 19:54:53.369649   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home
	I0429 19:54:53.369664   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 19:54:53.369679   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 19:54:53.369692   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 19:54:53.369706   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 19:54:53.369714   61545 main.go:141] libmachine: (force-systemd-flag-090341) Creating domain...
	I0429 19:54:53.369729   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Skipping /home - not owner
	I0429 19:54:53.371595   61545 main.go:141] libmachine: (force-systemd-flag-090341) define libvirt domain using xml: 
	I0429 19:54:53.371621   61545 main.go:141] libmachine: (force-systemd-flag-090341) <domain type='kvm'>
	I0429 19:54:53.371664   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <name>force-systemd-flag-090341</name>
	I0429 19:54:53.371689   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <memory unit='MiB'>2048</memory>
	I0429 19:54:53.371703   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <vcpu>2</vcpu>
	I0429 19:54:53.371715   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <features>
	I0429 19:54:53.371735   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <acpi/>
	I0429 19:54:53.371746   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <apic/>
	I0429 19:54:53.371753   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <pae/>
	I0429 19:54:53.371760   61545 main.go:141] libmachine: (force-systemd-flag-090341)     
	I0429 19:54:53.371769   61545 main.go:141] libmachine: (force-systemd-flag-090341)   </features>
	I0429 19:54:53.371776   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <cpu mode='host-passthrough'>
	I0429 19:54:53.371792   61545 main.go:141] libmachine: (force-systemd-flag-090341)   
	I0429 19:54:53.371799   61545 main.go:141] libmachine: (force-systemd-flag-090341)   </cpu>
	I0429 19:54:53.371807   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <os>
	I0429 19:54:53.371814   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <type>hvm</type>
	I0429 19:54:53.371822   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <boot dev='cdrom'/>
	I0429 19:54:53.371829   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <boot dev='hd'/>
	I0429 19:54:53.371850   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <bootmenu enable='no'/>
	I0429 19:54:53.371858   61545 main.go:141] libmachine: (force-systemd-flag-090341)   </os>
	I0429 19:54:53.371866   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <devices>
	I0429 19:54:53.371874   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <disk type='file' device='cdrom'>
	I0429 19:54:53.371886   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341/boot2docker.iso'/>
	I0429 19:54:53.371896   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <target dev='hdc' bus='scsi'/>
	I0429 19:54:53.371905   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <readonly/>
	I0429 19:54:53.371911   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </disk>
	I0429 19:54:53.371920   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <disk type='file' device='disk'>
	I0429 19:54:53.371930   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 19:54:53.371943   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341/force-systemd-flag-090341.rawdisk'/>
	I0429 19:54:53.371951   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <target dev='hda' bus='virtio'/>
	I0429 19:54:53.371983   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </disk>
	I0429 19:54:53.371999   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <interface type='network'>
	I0429 19:54:53.372014   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <source network='mk-force-systemd-flag-090341'/>
	I0429 19:54:53.372027   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <model type='virtio'/>
	I0429 19:54:53.372040   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </interface>
	I0429 19:54:53.372052   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <interface type='network'>
	I0429 19:54:53.372066   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <source network='default'/>
	I0429 19:54:53.372077   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <model type='virtio'/>
	I0429 19:54:53.372087   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </interface>
	I0429 19:54:53.372099   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <serial type='pty'>
	I0429 19:54:53.372112   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <target port='0'/>
	I0429 19:54:53.372127   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </serial>
	I0429 19:54:53.372142   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <console type='pty'>
	I0429 19:54:53.372154   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <target type='serial' port='0'/>
	I0429 19:54:53.372168   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </console>
	I0429 19:54:53.372179   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <rng model='virtio'>
	I0429 19:54:53.372191   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <backend model='random'>/dev/random</backend>
	I0429 19:54:53.372200   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </rng>
	I0429 19:54:53.372212   61545 main.go:141] libmachine: (force-systemd-flag-090341)     
	I0429 19:54:53.372223   61545 main.go:141] libmachine: (force-systemd-flag-090341)     
	I0429 19:54:53.372235   61545 main.go:141] libmachine: (force-systemd-flag-090341)   </devices>
	I0429 19:54:53.372246   61545 main.go:141] libmachine: (force-systemd-flag-090341) </domain>
	I0429 19:54:53.372259   61545 main.go:141] libmachine: (force-systemd-flag-090341) 
	I0429 19:54:53.377894   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | domain force-systemd-flag-090341 has defined MAC address 52:54:00:da:70:cb in network default
	I0429 19:54:53.378555   61545 main.go:141] libmachine: (force-systemd-flag-090341) Ensuring networks are active...
	I0429 19:54:53.378592   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | domain force-systemd-flag-090341 has defined MAC address 52:54:00:db:9f:a1 in network mk-force-systemd-flag-090341
	I0429 19:54:53.379287   61545 main.go:141] libmachine: (force-systemd-flag-090341) Ensuring network default is active
	I0429 19:54:53.379595   61545 main.go:141] libmachine: (force-systemd-flag-090341) Ensuring network mk-force-systemd-flag-090341 is active
	I0429 19:54:53.380183   61545 main.go:141] libmachine: (force-systemd-flag-090341) Getting domain xml...
	I0429 19:54:53.380837   61545 main.go:141] libmachine: (force-systemd-flag-090341) Creating domain...
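The XML above defines both the per-profile network (mk-force-systemd-flag-090341, 192.168.72.0/24) and the domain itself. If one wanted to inspect what libvirt actually registered at this point, the standard virsh commands would be (illustrative only, not run by the test):

    virsh net-dumpxml mk-force-systemd-flag-090341   # the private network created above
    virsh dumpxml force-systemd-flag-090341          # the domain defined from the XML above
    virsh domifaddr force-systemd-flag-090341        # the DHCP lease the driver waits for next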
	I0429 19:54:54.712069   61545 main.go:141] libmachine: (force-systemd-flag-090341) Waiting to get IP...
	I0429 19:54:54.712949   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | domain force-systemd-flag-090341 has defined MAC address 52:54:00:db:9f:a1 in network mk-force-systemd-flag-090341
	I0429 19:54:54.713407   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | unable to find current IP address of domain force-systemd-flag-090341 in network mk-force-systemd-flag-090341
	I0429 19:54:54.713466   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:54.713402   61938 retry.go:31] will retry after 231.042588ms: waiting for machine to come up
	I0429 19:54:54.541902   61304 addons.go:505] duration metric: took 6.937577ms for enable addons: enabled=[]
	I0429 19:54:54.758273   61304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:54:54.783638   61304 node_ready.go:35] waiting up to 6m0s for node "pause-467472" to be "Ready" ...
	I0429 19:54:54.787713   61304 node_ready.go:49] node "pause-467472" has status "Ready":"True"
	I0429 19:54:54.787738   61304 node_ready.go:38] duration metric: took 4.064321ms for node "pause-467472" to be "Ready" ...
	I0429 19:54:54.787750   61304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:54:54.888608   61304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:55.283354   61304 pod_ready.go:92] pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:55.283385   61304 pod_ready.go:81] duration metric: took 394.745277ms for pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:55.283398   61304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:55.685493   61304 pod_ready.go:92] pod "etcd-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:55.685521   61304 pod_ready.go:81] duration metric: took 402.114974ms for pod "etcd-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:55.685534   61304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:56.082370   61304 pod_ready.go:92] pod "kube-apiserver-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:56.082401   61304 pod_ready.go:81] duration metric: took 396.858387ms for pod "kube-apiserver-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:56.082414   61304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:56.482131   61304 pod_ready.go:92] pod "kube-controller-manager-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:56.482157   61304 pod_ready.go:81] duration metric: took 399.734186ms for pod "kube-controller-manager-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:56.482171   61304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2brrw" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:56.882736   61304 pod_ready.go:92] pod "kube-proxy-2brrw" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:56.882766   61304 pod_ready.go:81] duration metric: took 400.586597ms for pod "kube-proxy-2brrw" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:56.882778   61304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:57.283776   61304 pod_ready.go:92] pod "kube-scheduler-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:57.283808   61304 pod_ready.go:81] duration metric: took 401.021104ms for pod "kube-scheduler-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:57.283830   61304 pod_ready.go:38] duration metric: took 2.496067508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:54:57.283851   61304 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:54:57.283937   61304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:54:57.313110   61304 api_server.go:72] duration metric: took 2.778228022s to wait for apiserver process to appear ...
	I0429 19:54:57.313199   61304 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:54:57.313222   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:57.319584   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 200:
	ok
	I0429 19:54:57.320853   61304 api_server.go:141] control plane version: v1.30.0
	I0429 19:54:57.320887   61304 api_server.go:131] duration metric: took 7.677755ms to wait for apiserver health ...
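The health probe above hits the apiserver's /healthz endpoint directly over HTTPS. A manual equivalent, shown only as a sketch and not executed by the test (-k skips verification against the cluster CA):

    curl -k https://192.168.50.54:8443/healthz
    # ok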
	I0429 19:54:57.320897   61304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:54:57.486026   61304 system_pods.go:59] 6 kube-system pods found
	I0429 19:54:57.486054   61304 system_pods.go:61] "coredns-7db6d8ff4d-lxtq2" [db9d4855-6b30-41d8-b97d-2e8bab9e7135] Running
	I0429 19:54:57.486058   61304 system_pods.go:61] "etcd-pause-467472" [c4fcb8eb-d378-4229-b3d2-bf0d8da6d4a5] Running
	I0429 19:54:57.486062   61304 system_pods.go:61] "kube-apiserver-pause-467472" [52345820-7c48-453b-8b9a-1c837d664ea7] Running
	I0429 19:54:57.486076   61304 system_pods.go:61] "kube-controller-manager-pause-467472" [7cabe046-247c-4cda-83e2-3a34ebf9db66] Running
	I0429 19:54:57.486079   61304 system_pods.go:61] "kube-proxy-2brrw" [dc85d0aa-db2c-4c9a-a318-19fd8634c217] Running
	I0429 19:54:57.486088   61304 system_pods.go:61] "kube-scheduler-pause-467472" [0455fda0-9152-4212-97f5-764a57a328dc] Running
	I0429 19:54:57.486094   61304 system_pods.go:74] duration metric: took 165.190494ms to wait for pod list to return data ...
	I0429 19:54:57.486104   61304 default_sa.go:34] waiting for default service account to be created ...
	I0429 19:54:57.682814   61304 default_sa.go:45] found service account: "default"
	I0429 19:54:57.682847   61304 default_sa.go:55] duration metric: took 196.735363ms for default service account to be created ...
	I0429 19:54:57.682860   61304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 19:54:57.886303   61304 system_pods.go:86] 6 kube-system pods found
	I0429 19:54:57.886342   61304 system_pods.go:89] "coredns-7db6d8ff4d-lxtq2" [db9d4855-6b30-41d8-b97d-2e8bab9e7135] Running
	I0429 19:54:57.886350   61304 system_pods.go:89] "etcd-pause-467472" [c4fcb8eb-d378-4229-b3d2-bf0d8da6d4a5] Running
	I0429 19:54:57.886357   61304 system_pods.go:89] "kube-apiserver-pause-467472" [52345820-7c48-453b-8b9a-1c837d664ea7] Running
	I0429 19:54:57.886364   61304 system_pods.go:89] "kube-controller-manager-pause-467472" [7cabe046-247c-4cda-83e2-3a34ebf9db66] Running
	I0429 19:54:57.886370   61304 system_pods.go:89] "kube-proxy-2brrw" [dc85d0aa-db2c-4c9a-a318-19fd8634c217] Running
	I0429 19:54:57.886377   61304 system_pods.go:89] "kube-scheduler-pause-467472" [0455fda0-9152-4212-97f5-764a57a328dc] Running
	I0429 19:54:57.886387   61304 system_pods.go:126] duration metric: took 203.520155ms to wait for k8s-apps to be running ...
	I0429 19:54:57.886405   61304 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 19:54:57.886470   61304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:54:57.908443   61304 system_svc.go:56] duration metric: took 22.028252ms WaitForService to wait for kubelet
	I0429 19:54:57.908478   61304 kubeadm.go:576] duration metric: took 3.373599308s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:54:57.908502   61304 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:54:58.081494   61304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:54:58.081518   61304 node_conditions.go:123] node cpu capacity is 2
	I0429 19:54:58.081528   61304 node_conditions.go:105] duration metric: took 173.020499ms to run NodePressure ...
	I0429 19:54:58.081538   61304 start.go:240] waiting for startup goroutines ...
	I0429 19:54:58.081545   61304 start.go:245] waiting for cluster config update ...
	I0429 19:54:58.081551   61304 start.go:254] writing updated cluster config ...
	I0429 19:54:58.081823   61304 ssh_runner.go:195] Run: rm -f paused
	I0429 19:54:58.139557   61304 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 19:54:58.141721   61304 out.go:177] * Done! kubectl is now configured to use "pause-467472" cluster and "default" namespace by default
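With the profile reported as Done, the six kube-system pods enumerated above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) should all be Ready. A follow-up check one could run against the same kubeconfig context, as an illustrative sketch only:

    kubectl --context pause-467472 -n kube-system get pods
    kubectl --context pause-467472 -n kube-system wait --for=condition=Ready pod --all --timeout=60s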
	
	
	==> CRI-O <==
	Apr 29 19:54:58 pause-467472 crio[2946]: time="2024-04-29 19:54:58.977434820Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420498977400993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0023b6d1-074c-42d1-bd69-e09e616fef50 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:54:58 pause-467472 crio[2946]: time="2024-04-29 19:54:58.978394850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ab9a1d3-22a8-43ab-84a1-b6e407ea4535 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:54:58 pause-467472 crio[2946]: time="2024-04-29 19:54:58.978468688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ab9a1d3-22a8-43ab-84a1-b6e407ea4535 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:54:58 pause-467472 crio[2946]: time="2024-04-29 19:54:58.978833605Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:856a6562f72ccdce03ab5ddb42f18ec74b67d1b65d443d072fbf0f667d53bf75,PodSandboxId:9b118bcbdc20471ab822568594b4ab11daede88d374a96f5258660b8c1610f4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420479841655005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b04ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b041c92e4d095bc7e42cfaaa43da63fb5b59ec8a3ee3a6f384a612eebc5c08,PodSandboxId:2d2d64ea3347856ac8c54fab25e44946bd7f17c312367ca78e85808ea287b825,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714420479825619141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df4a7540f4dacf7548da27f73e85aecb1def304ce306c6ac46e6d3e883bebe8,PodSandboxId:cf63ca870504b7b727de89fa47b3a10e1ab43abe60b6f4bb243c045e5bf4c356,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714420475073492348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annot
ations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34387ffcf4be5242d03080b49b44ff2a9c95713764715d7c363b069cb7724f4a,PodSandboxId:e2135e6c888b708e90978af4b011949e65555a6bfb57b99f967277e9581e91ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714420475044030794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]
string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53997e74a83197f734d0b47f1285cebcec21e80d3d391876c898a3a9d2a3962,PodSandboxId:d1339d9ebd7d4682341a71c7d374fadf70b370b5f83b47d7047b86c820c75ff6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714420475061514516,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49ff880c87d08fbe442888ce45f7e407052b1ba54151444a06eddad58681ce4,PodSandboxId:b371b66423e5b34535b19361eeac285636f92eb985049fdbf0832a861bc623c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714420475030824690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io
.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a394dbf9934e3dc9ed65529f9a97129f035af300089973245a90d2e2e8474,PodSandboxId:ad8368c23007ce3c34e748af9272d03626a3988444f702d2f84aee415a10dbc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420469331756819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b0
4ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb26043d1744b0e27644bdd5a8f34835683bedc9dcc08a1e1c1c2b07cda89127,PodSandboxId:46a3e25f595fad1c3483e560aa411eddbd36327e65a574dc16922083a0732d95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714420468280348279,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80ed718bb499c55acb1feb339adcd1401d1da0ca245633dae77fd5c49ec6ef03,PodSandboxId:b294341f60acf85f6f0bcd1eb836c817ba1dcec14914d2b2f33cf784b3802be9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714420468506632508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annotations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fa5f157e8331a96ddcfb01245b8bcd3e83b3e0c1a86f692339d9b6caba3858f,PodSandboxId:f91ce8b98a516b2d87a1402c268d51d766bd07895ae869e83d01f30d36fb4ae7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714420468187572152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a877329ca5eb47d2eadf1c18f3f2091dea760b6e9d962d14e7c882f854bb878,PodSandboxId:5cdb7927839d9409dccc38594f86fdf5495b261ae455bbd1a176fca1fbcf25cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714420467996458839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13560af33a9dce7ccf8d5edc13a5ac3b8192c21a14a1d74c86e409357f505e98,PodSandboxId:4c0e67e1628662fb8ab7ca25f0a75703c302676fc1e8779a1112914a4a2ee73a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714420467876055127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ab9a1d3-22a8-43ab-84a1-b6e407ea4535 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.041235845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b7b3665-826e-4969-9781-e333aac28b86 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.041345327Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b7b3665-826e-4969-9781-e333aac28b86 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.043425223Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ce70165-b93b-491f-b69f-508a23e960bc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.044044367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420499044004825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ce70165-b93b-491f-b69f-508a23e960bc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.044734262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=867b8b30-35aa-46e9-a88d-23dd1436d034 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.044807153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=867b8b30-35aa-46e9-a88d-23dd1436d034 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.045250805Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:856a6562f72ccdce03ab5ddb42f18ec74b67d1b65d443d072fbf0f667d53bf75,PodSandboxId:9b118bcbdc20471ab822568594b4ab11daede88d374a96f5258660b8c1610f4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420479841655005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b04ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b041c92e4d095bc7e42cfaaa43da63fb5b59ec8a3ee3a6f384a612eebc5c08,PodSandboxId:2d2d64ea3347856ac8c54fab25e44946bd7f17c312367ca78e85808ea287b825,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714420479825619141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df4a7540f4dacf7548da27f73e85aecb1def304ce306c6ac46e6d3e883bebe8,PodSandboxId:cf63ca870504b7b727de89fa47b3a10e1ab43abe60b6f4bb243c045e5bf4c356,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714420475073492348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annot
ations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34387ffcf4be5242d03080b49b44ff2a9c95713764715d7c363b069cb7724f4a,PodSandboxId:e2135e6c888b708e90978af4b011949e65555a6bfb57b99f967277e9581e91ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714420475044030794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]
string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53997e74a83197f734d0b47f1285cebcec21e80d3d391876c898a3a9d2a3962,PodSandboxId:d1339d9ebd7d4682341a71c7d374fadf70b370b5f83b47d7047b86c820c75ff6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714420475061514516,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49ff880c87d08fbe442888ce45f7e407052b1ba54151444a06eddad58681ce4,PodSandboxId:b371b66423e5b34535b19361eeac285636f92eb985049fdbf0832a861bc623c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714420475030824690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io
.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a394dbf9934e3dc9ed65529f9a97129f035af300089973245a90d2e2e8474,PodSandboxId:ad8368c23007ce3c34e748af9272d03626a3988444f702d2f84aee415a10dbc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420469331756819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b0
4ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb26043d1744b0e27644bdd5a8f34835683bedc9dcc08a1e1c1c2b07cda89127,PodSandboxId:46a3e25f595fad1c3483e560aa411eddbd36327e65a574dc16922083a0732d95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714420468280348279,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80ed718bb499c55acb1feb339adcd1401d1da0ca245633dae77fd5c49ec6ef03,PodSandboxId:b294341f60acf85f6f0bcd1eb836c817ba1dcec14914d2b2f33cf784b3802be9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714420468506632508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annotations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fa5f157e8331a96ddcfb01245b8bcd3e83b3e0c1a86f692339d9b6caba3858f,PodSandboxId:f91ce8b98a516b2d87a1402c268d51d766bd07895ae869e83d01f30d36fb4ae7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714420468187572152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a877329ca5eb47d2eadf1c18f3f2091dea760b6e9d962d14e7c882f854bb878,PodSandboxId:5cdb7927839d9409dccc38594f86fdf5495b261ae455bbd1a176fca1fbcf25cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714420467996458839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13560af33a9dce7ccf8d5edc13a5ac3b8192c21a14a1d74c86e409357f505e98,PodSandboxId:4c0e67e1628662fb8ab7ca25f0a75703c302676fc1e8779a1112914a4a2ee73a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714420467876055127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=867b8b30-35aa-46e9-a88d-23dd1436d034 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.105267528Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dfe9e0e0-2a55-4a3e-a6ed-8114249733a2 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.105391094Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dfe9e0e0-2a55-4a3e-a6ed-8114249733a2 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.109096728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=42f6af8d-8c35-43ec-b846-f72859f19d90 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.109676271Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420499109630798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=42f6af8d-8c35-43ec-b846-f72859f19d90 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.110826725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b57ab59-d999-4dd8-ac87-1089a0ef575c name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.111052996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b57ab59-d999-4dd8-ac87-1089a0ef575c name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.111425622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:856a6562f72ccdce03ab5ddb42f18ec74b67d1b65d443d072fbf0f667d53bf75,PodSandboxId:9b118bcbdc20471ab822568594b4ab11daede88d374a96f5258660b8c1610f4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420479841655005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b04ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b041c92e4d095bc7e42cfaaa43da63fb5b59ec8a3ee3a6f384a612eebc5c08,PodSandboxId:2d2d64ea3347856ac8c54fab25e44946bd7f17c312367ca78e85808ea287b825,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714420479825619141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df4a7540f4dacf7548da27f73e85aecb1def304ce306c6ac46e6d3e883bebe8,PodSandboxId:cf63ca870504b7b727de89fa47b3a10e1ab43abe60b6f4bb243c045e5bf4c356,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714420475073492348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annot
ations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34387ffcf4be5242d03080b49b44ff2a9c95713764715d7c363b069cb7724f4a,PodSandboxId:e2135e6c888b708e90978af4b011949e65555a6bfb57b99f967277e9581e91ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714420475044030794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]
string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53997e74a83197f734d0b47f1285cebcec21e80d3d391876c898a3a9d2a3962,PodSandboxId:d1339d9ebd7d4682341a71c7d374fadf70b370b5f83b47d7047b86c820c75ff6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714420475061514516,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49ff880c87d08fbe442888ce45f7e407052b1ba54151444a06eddad58681ce4,PodSandboxId:b371b66423e5b34535b19361eeac285636f92eb985049fdbf0832a861bc623c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714420475030824690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io
.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a394dbf9934e3dc9ed65529f9a97129f035af300089973245a90d2e2e8474,PodSandboxId:ad8368c23007ce3c34e748af9272d03626a3988444f702d2f84aee415a10dbc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420469331756819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b0
4ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb26043d1744b0e27644bdd5a8f34835683bedc9dcc08a1e1c1c2b07cda89127,PodSandboxId:46a3e25f595fad1c3483e560aa411eddbd36327e65a574dc16922083a0732d95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714420468280348279,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80ed718bb499c55acb1feb339adcd1401d1da0ca245633dae77fd5c49ec6ef03,PodSandboxId:b294341f60acf85f6f0bcd1eb836c817ba1dcec14914d2b2f33cf784b3802be9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714420468506632508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annotations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fa5f157e8331a96ddcfb01245b8bcd3e83b3e0c1a86f692339d9b6caba3858f,PodSandboxId:f91ce8b98a516b2d87a1402c268d51d766bd07895ae869e83d01f30d36fb4ae7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714420468187572152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a877329ca5eb47d2eadf1c18f3f2091dea760b6e9d962d14e7c882f854bb878,PodSandboxId:5cdb7927839d9409dccc38594f86fdf5495b261ae455bbd1a176fca1fbcf25cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714420467996458839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13560af33a9dce7ccf8d5edc13a5ac3b8192c21a14a1d74c86e409357f505e98,PodSandboxId:4c0e67e1628662fb8ab7ca25f0a75703c302676fc1e8779a1112914a4a2ee73a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714420467876055127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b57ab59-d999-4dd8-ac87-1089a0ef575c name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.176820374Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e22a9b49-08bf-4b43-a123-3d1ec56ec03f name=/runtime.v1.RuntimeService/Version
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.177106960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e22a9b49-08bf-4b43-a123-3d1ec56ec03f name=/runtime.v1.RuntimeService/Version
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.179062555Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0a5d903-d99a-4bb1-8f70-95f8d8285067 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.179524790Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420499179492974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0a5d903-d99a-4bb1-8f70-95f8d8285067 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.180595305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8692b0b-340d-4da9-923a-569dae3e922b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.180655115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8692b0b-340d-4da9-923a-569dae3e922b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:54:59 pause-467472 crio[2946]: time="2024-04-29 19:54:59.181326255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:856a6562f72ccdce03ab5ddb42f18ec74b67d1b65d443d072fbf0f667d53bf75,PodSandboxId:9b118bcbdc20471ab822568594b4ab11daede88d374a96f5258660b8c1610f4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420479841655005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b04ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b041c92e4d095bc7e42cfaaa43da63fb5b59ec8a3ee3a6f384a612eebc5c08,PodSandboxId:2d2d64ea3347856ac8c54fab25e44946bd7f17c312367ca78e85808ea287b825,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714420479825619141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df4a7540f4dacf7548da27f73e85aecb1def304ce306c6ac46e6d3e883bebe8,PodSandboxId:cf63ca870504b7b727de89fa47b3a10e1ab43abe60b6f4bb243c045e5bf4c356,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714420475073492348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annot
ations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34387ffcf4be5242d03080b49b44ff2a9c95713764715d7c363b069cb7724f4a,PodSandboxId:e2135e6c888b708e90978af4b011949e65555a6bfb57b99f967277e9581e91ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714420475044030794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]
string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53997e74a83197f734d0b47f1285cebcec21e80d3d391876c898a3a9d2a3962,PodSandboxId:d1339d9ebd7d4682341a71c7d374fadf70b370b5f83b47d7047b86c820c75ff6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714420475061514516,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49ff880c87d08fbe442888ce45f7e407052b1ba54151444a06eddad58681ce4,PodSandboxId:b371b66423e5b34535b19361eeac285636f92eb985049fdbf0832a861bc623c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714420475030824690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io
.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a394dbf9934e3dc9ed65529f9a97129f035af300089973245a90d2e2e8474,PodSandboxId:ad8368c23007ce3c34e748af9272d03626a3988444f702d2f84aee415a10dbc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420469331756819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b0
4ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb26043d1744b0e27644bdd5a8f34835683bedc9dcc08a1e1c1c2b07cda89127,PodSandboxId:46a3e25f595fad1c3483e560aa411eddbd36327e65a574dc16922083a0732d95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714420468280348279,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80ed718bb499c55acb1feb339adcd1401d1da0ca245633dae77fd5c49ec6ef03,PodSandboxId:b294341f60acf85f6f0bcd1eb836c817ba1dcec14914d2b2f33cf784b3802be9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714420468506632508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annotations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fa5f157e8331a96ddcfb01245b8bcd3e83b3e0c1a86f692339d9b6caba3858f,PodSandboxId:f91ce8b98a516b2d87a1402c268d51d766bd07895ae869e83d01f30d36fb4ae7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714420468187572152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a877329ca5eb47d2eadf1c18f3f2091dea760b6e9d962d14e7c882f854bb878,PodSandboxId:5cdb7927839d9409dccc38594f86fdf5495b261ae455bbd1a176fca1fbcf25cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714420467996458839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13560af33a9dce7ccf8d5edc13a5ac3b8192c21a14a1d74c86e409357f505e98,PodSandboxId:4c0e67e1628662fb8ab7ca25f0a75703c302676fc1e8779a1112914a4a2ee73a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714420467876055127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8692b0b-340d-4da9-923a-569dae3e922b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	856a6562f72cc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   9b118bcbdc204       coredns-7db6d8ff4d-lxtq2
	08b041c92e4d0       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   19 seconds ago      Running             kube-proxy                2                   2d2d64ea33478       kube-proxy-2brrw
	3df4a7540f4da       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago      Running             etcd                      2                   cf63ca870504b       etcd-pause-467472
	e53997e74a831       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   24 seconds ago      Running             kube-controller-manager   2                   d1339d9ebd7d4       kube-controller-manager-pause-467472
	34387ffcf4be5       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   24 seconds ago      Running             kube-apiserver            2                   e2135e6c888b7       kube-apiserver-pause-467472
	f49ff880c87d0       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   24 seconds ago      Running             kube-scheduler            2                   b371b66423e5b       kube-scheduler-pause-467472
	a43a394dbf993       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   30 seconds ago      Exited              coredns                   1                   ad8368c23007c       coredns-7db6d8ff4d-lxtq2
	80ed718bb499c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   31 seconds ago      Exited              etcd                      1                   b294341f60acf       etcd-pause-467472
	cb26043d1744b       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   31 seconds ago      Exited              kube-controller-manager   1                   46a3e25f595fa       kube-controller-manager-pause-467472
	5fa5f157e8331       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   31 seconds ago      Exited              kube-proxy                1                   f91ce8b98a516       kube-proxy-2brrw
	0a877329ca5eb       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   31 seconds ago      Exited              kube-scheduler            1                   5cdb7927839d9       kube-scheduler-pause-467472
	13560af33a9dc       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   31 seconds ago      Exited              kube-apiserver            1                   4c0e67e162866       kube-apiserver-pause-467472
	
	
	==> coredns [856a6562f72ccdce03ab5ddb42f18ec74b67d1b65d443d072fbf0f667d53bf75] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41287 - 6138 "HINFO IN 8308597871302896540.5934827365394592310. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016164531s
	
	
	==> coredns [a43a394dbf9934e3dc9ed65529f9a97129f035af300089973245a90d2e2e8474] <==
	
	
	==> describe nodes <==
	Name:               pause-467472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-467472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=pause-467472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T19_53_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:53:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-467472
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:54:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:54:39 +0000   Mon, 29 Apr 2024 19:53:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:54:39 +0000   Mon, 29 Apr 2024 19:53:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:54:39 +0000   Mon, 29 Apr 2024 19:53:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:54:39 +0000   Mon, 29 Apr 2024 19:53:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.54
	  Hostname:    pause-467472
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 af5f42a8f5094c06bdc81621083e473c
	  System UUID:                af5f42a8-f509-4c06-bdc8-1621083e473c
	  Boot ID:                    9b81afe2-f057-478b-8949-1c6f4d94b8ba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-lxtq2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     62s
	  kube-system                 etcd-pause-467472                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         75s
	  kube-system                 kube-apiserver-pause-467472             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-pause-467472    200m (10%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-proxy-2brrw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-pause-467472             100m (5%)     0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 61s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 82s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node pause-467472 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node pause-467472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x7 over 82s)  kubelet          Node pause-467472 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    75s                kubelet          Node pause-467472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  75s                kubelet          Node pause-467472 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     75s                kubelet          Node pause-467472 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                74s                kubelet          Node pause-467472 status is now: NodeReady
	  Normal  RegisteredNode           63s                node-controller  Node pause-467472 event: Registered Node pause-467472 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-467472 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-467472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-467472 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-467472 event: Registered Node pause-467472 in Controller
	
	
	==> dmesg <==
	[  +0.062656] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067621] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.226113] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.140985] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.368236] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +5.097682] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.062955] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.176716] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +1.043992] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.024170] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.092580] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.106205] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.889035] systemd-fstab-generator[1527]: Ignoring "noauto" option for root device
	[Apr29 19:54] kauditd_printk_skb: 98 callbacks suppressed
	[  +0.352053] systemd-fstab-generator[2470]: Ignoring "noauto" option for root device
	[  +0.470546] systemd-fstab-generator[2643]: Ignoring "noauto" option for root device
	[  +0.611996] systemd-fstab-generator[2787]: Ignoring "noauto" option for root device
	[  +0.235917] systemd-fstab-generator[2814]: Ignoring "noauto" option for root device
	[  +0.600005] systemd-fstab-generator[2911]: Ignoring "noauto" option for root device
	[  +1.996961] systemd-fstab-generator[3512]: Ignoring "noauto" option for root device
	[  +2.635077] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[  +0.087594] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.578161] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.927849] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.750540] systemd-fstab-generator[4074]: Ignoring "noauto" option for root device
	
	
	==> etcd [3df4a7540f4dacf7548da27f73e85aecb1def304ce306c6ac46e6d3e883bebe8] <==
	{"level":"info","ts":"2024-04-29T19:54:35.714605Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","added-peer-id":"b0a6bbe4c9ddfbc1","added-peer-peer-urls":["https://192.168.50.54:2380"]}
	{"level":"info","ts":"2024-04-29T19:54:35.714833Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:54:35.714878Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:54:35.719361Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T19:54:35.719687Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b0a6bbe4c9ddfbc1","initial-advertise-peer-urls":["https://192.168.50.54:2380"],"listen-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T19:54:35.719722Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T19:54:35.719834Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-04-29T19:54:35.719844Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-04-29T19:54:37.507741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T19:54:37.507842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T19:54:37.508032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgPreVoteResp from b0a6bbe4c9ddfbc1 at term 2"}
	{"level":"info","ts":"2024-04-29T19:54:37.508076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T19:54:37.508101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgVoteResp from b0a6bbe4c9ddfbc1 at term 3"}
	{"level":"info","ts":"2024-04-29T19:54:37.508136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became leader at term 3"}
	{"level":"info","ts":"2024-04-29T19:54:37.508163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b0a6bbe4c9ddfbc1 elected leader b0a6bbe4c9ddfbc1 at term 3"}
	{"level":"info","ts":"2024-04-29T19:54:37.514773Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b0a6bbe4c9ddfbc1","local-member-attributes":"{Name:pause-467472 ClientURLs:[https://192.168.50.54:2379]}","request-path":"/0/members/b0a6bbe4c9ddfbc1/attributes","cluster-id":"b7dc4198fc8444d0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T19:54:37.51481Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:54:37.515329Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T19:54:37.515385Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T19:54:37.514842Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:54:37.517244Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T19:54:37.518204Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.54:2379"}
	{"level":"warn","ts":"2024-04-29T19:54:59.744239Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.12829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T19:54:59.744692Z","caller":"traceutil/trace.go:171","msg":"trace[1065081412] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:506; }","duration":"135.636915ms","start":"2024-04-29T19:54:59.609022Z","end":"2024-04-29T19:54:59.744659Z","steps":["trace[1065081412] 'range keys from in-memory index tree'  (duration: 135.034941ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T19:55:00.623861Z","caller":"traceutil/trace.go:171","msg":"trace[952986955] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"288.035252ms","start":"2024-04-29T19:55:00.335799Z","end":"2024-04-29T19:55:00.623835Z","steps":["trace[952986955] 'process raft request'  (duration: 243.806479ms)","trace[952986955] 'compare'  (duration: 43.916865ms)"],"step_count":2}
	
	
	==> etcd [80ed718bb499c55acb1feb339adcd1401d1da0ca245633dae77fd5c49ec6ef03] <==
	{"level":"info","ts":"2024-04-29T19:54:29.267592Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"103.777435ms"}
	{"level":"info","ts":"2024-04-29T19:54:29.315075Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-29T19:54:29.406314Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","commit-index":445}
	{"level":"info","ts":"2024-04-29T19:54:29.406544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-29T19:54:29.40659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became follower at term 2"}
	{"level":"info","ts":"2024-04-29T19:54:29.406614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b0a6bbe4c9ddfbc1 [peers: [], term: 2, commit: 445, applied: 0, lastindex: 445, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-29T19:54:29.417287Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-29T19:54:29.469698Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":427}
	{"level":"info","ts":"2024-04-29T19:54:29.476518Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-29T19:54:29.49124Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b0a6bbe4c9ddfbc1","timeout":"7s"}
	{"level":"info","ts":"2024-04-29T19:54:29.495035Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b0a6bbe4c9ddfbc1"}
	{"level":"info","ts":"2024-04-29T19:54:29.495839Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b0a6bbe4c9ddfbc1","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-29T19:54:29.510497Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-29T19:54:29.510826Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:54:29.51104Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:54:29.511087Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:54:29.511477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 switched to configuration voters=(12729067988122991553)"}
	{"level":"info","ts":"2024-04-29T19:54:29.512496Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","added-peer-id":"b0a6bbe4c9ddfbc1","added-peer-peer-urls":["https://192.168.50.54:2380"]}
	{"level":"info","ts":"2024-04-29T19:54:29.537656Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:54:29.537814Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:54:29.581106Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-04-29T19:54:29.581148Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-04-29T19:54:29.58123Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T19:54:29.58159Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b0a6bbe4c9ddfbc1","initial-advertise-peer-urls":["https://192.168.50.54:2380"],"listen-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T19:54:29.581621Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 19:55:02 up 1 min,  0 users,  load average: 1.25, 0.52, 0.19
	Linux pause-467472 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [13560af33a9dce7ccf8d5edc13a5ac3b8192c21a14a1d74c86e409357f505e98] <==
	I0429 19:54:28.576636       1 options.go:221] external host was not specified, using 192.168.50.54
	I0429 19:54:28.580875       1 server.go:148] Version: v1.30.0
	I0429 19:54:28.581027       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [34387ffcf4be5242d03080b49b44ff2a9c95713764715d7c363b069cb7724f4a] <==
	I0429 19:54:38.904200       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 19:54:38.905465       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 19:54:38.905548       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 19:54:38.905556       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 19:54:38.906286       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 19:54:38.906716       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 19:54:38.917811       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 19:54:38.933552       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 19:54:38.933626       1 policy_source.go:224] refreshing policies
	I0429 19:54:38.936751       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 19:54:38.948881       1 aggregator.go:165] initial CRD sync complete...
	I0429 19:54:38.949087       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 19:54:38.949121       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 19:54:38.949146       1 cache.go:39] Caches are synced for autoregister controller
	I0429 19:54:38.949561       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 19:54:38.949888       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0429 19:54:38.987320       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0429 19:54:39.810716       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 19:54:40.799346       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 19:54:40.811285       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 19:54:40.866236       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 19:54:40.906866       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 19:54:40.931366       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 19:54:51.666546       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 19:54:51.820332       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [cb26043d1744b0e27644bdd5a8f34835683bedc9dcc08a1e1c1c2b07cda89127] <==
	
	
	==> kube-controller-manager [e53997e74a83197f734d0b47f1285cebcec21e80d3d391876c898a3a9d2a3962] <==
	I0429 19:54:51.693391       1 shared_informer.go:320] Caches are synced for stateful set
	I0429 19:54:51.704343       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0429 19:54:51.727206       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"pause-467472\" does not exist"
	I0429 19:54:51.750331       1 shared_informer.go:320] Caches are synced for TTL
	I0429 19:54:51.762791       1 shared_informer.go:320] Caches are synced for GC
	I0429 19:54:51.768193       1 shared_informer.go:320] Caches are synced for node
	I0429 19:54:51.768259       1 shared_informer.go:320] Caches are synced for disruption
	I0429 19:54:51.768299       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0429 19:54:51.768320       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0429 19:54:51.768350       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0429 19:54:51.768356       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0429 19:54:51.774738       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 19:54:51.788879       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 19:54:51.799146       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0429 19:54:51.799761       1 shared_informer.go:320] Caches are synced for daemon sets
	I0429 19:54:51.803382       1 shared_informer.go:320] Caches are synced for taint
	I0429 19:54:51.803798       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0429 19:54:51.804097       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-467472"
	I0429 19:54:51.804268       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0429 19:54:51.812945       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 19:54:51.822715       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0429 19:54:51.825500       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 19:54:52.243380       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 19:54:52.243504       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 19:54:52.282047       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [08b041c92e4d095bc7e42cfaaa43da63fb5b59ec8a3ee3a6f384a612eebc5c08] <==
	I0429 19:54:40.058216       1 server_linux.go:69] "Using iptables proxy"
	I0429 19:54:40.086441       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.54"]
	I0429 19:54:40.172855       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:54:40.173024       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:54:40.173056       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:54:40.176742       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:54:40.176999       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:54:40.177044       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:54:40.178354       1 config.go:192] "Starting service config controller"
	I0429 19:54:40.178402       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:54:40.178428       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:54:40.178431       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:54:40.178864       1 config.go:319] "Starting node config controller"
	I0429 19:54:40.179003       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:54:40.278818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:54:40.278993       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:54:40.279062       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [5fa5f157e8331a96ddcfb01245b8bcd3e83b3e0c1a86f692339d9b6caba3858f] <==
	
	
	==> kube-scheduler [0a877329ca5eb47d2eadf1c18f3f2091dea760b6e9d962d14e7c882f854bb878] <==
	
	
	==> kube-scheduler [f49ff880c87d08fbe442888ce45f7e407052b1ba54151444a06eddad58681ce4] <==
	I0429 19:54:36.422358       1 serving.go:380] Generated self-signed cert in-memory
	W0429 19:54:38.845512       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 19:54:38.845568       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 19:54:38.845579       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 19:54:38.845589       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 19:54:38.934856       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 19:54:38.935012       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:54:38.953513       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 19:54:38.953647       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 19:54:38.953085       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 19:54:38.958136       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 19:54:39.059221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 19:54:35 pause-467472 kubelet[3643]: I0429 19:54:35.011565    3643 scope.go:117] "RemoveContainer" containerID="cb26043d1744b0e27644bdd5a8f34835683bedc9dcc08a1e1c1c2b07cda89127"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: I0429 19:54:35.011991    3643 scope.go:117] "RemoveContainer" containerID="0a877329ca5eb47d2eadf1c18f3f2091dea760b6e9d962d14e7c882f854bb878"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: I0429 19:54:35.012200    3643 scope.go:117] "RemoveContainer" containerID="80ed718bb499c55acb1feb339adcd1401d1da0ca245633dae77fd5c49ec6ef03"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: I0429 19:54:35.015479    3643 scope.go:117] "RemoveContainer" containerID="13560af33a9dce7ccf8d5edc13a5ac3b8192c21a14a1d74c86e409357f505e98"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: E0429 19:54:35.115779    3643 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-467472?timeout=10s\": dial tcp 192.168.50.54:8443: connect: connection refused" interval="800ms"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: I0429 19:54:35.225583    3643 kubelet_node_status.go:73] "Attempting to register node" node="pause-467472"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: E0429 19:54:35.226409    3643 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.54:8443: connect: connection refused" node="pause-467472"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: W0429 19:54:35.298454    3643 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-467472&limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 29 19:54:35 pause-467472 kubelet[3643]: E0429 19:54:35.298549    3643 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-467472&limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 29 19:54:35 pause-467472 kubelet[3643]: W0429 19:54:35.475863    3643 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 29 19:54:35 pause-467472 kubelet[3643]: E0429 19:54:35.476020    3643 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 29 19:54:36 pause-467472 kubelet[3643]: I0429 19:54:36.028581    3643 kubelet_node_status.go:73] "Attempting to register node" node="pause-467472"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.018649    3643 kubelet_node_status.go:112] "Node was previously registered" node="pause-467472"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.019139    3643 kubelet_node_status.go:76] "Successfully registered node" node="pause-467472"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.020850    3643 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.022075    3643 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.488606    3643 apiserver.go:52] "Watching apiserver"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.492186    3643 topology_manager.go:215] "Topology Admit Handler" podUID="db9d4855-6b30-41d8-b97d-2e8bab9e7135" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lxtq2"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.492393    3643 topology_manager.go:215] "Topology Admit Handler" podUID="dc85d0aa-db2c-4c9a-a318-19fd8634c217" podNamespace="kube-system" podName="kube-proxy-2brrw"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.503773    3643 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.541836    3643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc85d0aa-db2c-4c9a-a318-19fd8634c217-lib-modules\") pod \"kube-proxy-2brrw\" (UID: \"dc85d0aa-db2c-4c9a-a318-19fd8634c217\") " pod="kube-system/kube-proxy-2brrw"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.542043    3643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc85d0aa-db2c-4c9a-a318-19fd8634c217-xtables-lock\") pod \"kube-proxy-2brrw\" (UID: \"dc85d0aa-db2c-4c9a-a318-19fd8634c217\") " pod="kube-system/kube-proxy-2brrw"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.793161    3643 scope.go:117] "RemoveContainer" containerID="5fa5f157e8331a96ddcfb01245b8bcd3e83b3e0c1a86f692339d9b6caba3858f"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.793441    3643 scope.go:117] "RemoveContainer" containerID="a43a394dbf9934e3dc9ed65529f9a97129f035af300089973245a90d2e2e8474"
	Apr 29 19:54:48 pause-467472 kubelet[3643]: I0429 19:54:48.139263    3643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-467472 -n pause-467472
helpers_test.go:261: (dbg) Run:  kubectl --context pause-467472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-467472 -n pause-467472
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-467472 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-467472 logs -n 25: (1.768027463s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo docker                         | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo cat                            | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo                                | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo find                           | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-870155 sudo crio                           | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-870155                                     | cilium-870155             | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC | 29 Apr 24 19:54 UTC |
	| start   | -p pause-467472                                      | pause-467472              | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC | 29 Apr 24 19:54 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-407092                            | running-upgrade-407092    | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC | 29 Apr 24 19:54 UTC |
	| start   | -p cert-expiration-509508                            | cert-expiration-509508    | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-090341                         | force-systemd-flag-090341 | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-935578                         | kubernetes-upgrade-935578 | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-935578                         | kubernetes-upgrade-935578 | jenkins | v1.33.0 | 29 Apr 24 19:54 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 19:54:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 19:54:36.637836   61801 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:54:36.637995   61801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:54:36.638007   61801 out.go:304] Setting ErrFile to fd 2...
	I0429 19:54:36.638023   61801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:54:36.638322   61801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:54:36.639012   61801 out.go:298] Setting JSON to false
	I0429 19:54:36.640292   61801 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5775,"bootTime":1714414702,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 19:54:36.640374   61801 start.go:139] virtualization: kvm guest
	I0429 19:54:36.642541   61801 out.go:177] * [kubernetes-upgrade-935578] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 19:54:36.644283   61801 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:54:36.644334   61801 notify.go:220] Checking for updates...
	I0429 19:54:36.645679   61801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:54:36.646956   61801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:54:36.648216   61801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:54:36.649484   61801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 19:54:36.650701   61801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:54:36.652416   61801 config.go:182] Loaded profile config "kubernetes-upgrade-935578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:54:36.652976   61801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:54:36.653037   61801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:54:36.668570   61801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46873
	I0429 19:54:36.669018   61801 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:54:36.669690   61801 main.go:141] libmachine: Using API Version  1
	I0429 19:54:36.669752   61801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:54:36.670111   61801 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:54:36.670310   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:54:36.670570   61801 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:54:36.670885   61801 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:54:36.670924   61801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:54:36.686757   61801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I0429 19:54:36.687200   61801 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:54:36.687706   61801 main.go:141] libmachine: Using API Version  1
	I0429 19:54:36.687739   61801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:54:36.688118   61801 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:54:36.688420   61801 main.go:141] libmachine: (kubernetes-upgrade-935578) Calling .DriverName
	I0429 19:54:36.731472   61801 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 19:54:36.732931   61801 start.go:297] selected driver: kvm2
	I0429 19:54:36.732953   61801 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-935578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-935578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:54:36.733112   61801 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:54:36.734166   61801 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:54:36.734266   61801 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 19:54:36.752334   61801 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 19:54:36.752714   61801 cni.go:84] Creating CNI manager for ""
	I0429 19:54:36.752733   61801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:54:36.752790   61801 start.go:340] cluster config:
	{Name:kubernetes-upgrade-935578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-935578 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:54:36.752906   61801 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:54:36.755612   61801 out.go:177] * Starting "kubernetes-upgrade-935578" primary control-plane node in "kubernetes-upgrade-935578" cluster
	I0429 19:54:34.118158   61304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.242227495s)
	I0429 19:54:34.118195   61304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:54:34.364403   61304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:54:34.446089   61304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:54:34.565464   61304 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:54:34.565554   61304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:54:35.066291   61304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:54:35.565747   61304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:54:35.584006   61304 api_server.go:72] duration metric: took 1.018541203s to wait for apiserver process to appear ...
	I0429 19:54:35.584037   61304 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:54:35.584059   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:35.733654   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:35.734418   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | unable to find current IP address of domain cert-expiration-509508 in network mk-cert-expiration-509508
	I0429 19:54:35.734435   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | I0429 19:54:35.734232   61621 retry.go:31] will retry after 2.119212845s: waiting for machine to come up
	I0429 19:54:37.855993   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:37.856491   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | unable to find current IP address of domain cert-expiration-509508 in network mk-cert-expiration-509508
	I0429 19:54:37.856513   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | I0429 19:54:37.856452   61621 retry.go:31] will retry after 2.524229713s: waiting for machine to come up
	I0429 19:54:38.841293   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 19:54:38.841328   61304 api_server.go:103] status: https://192.168.50.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 19:54:38.841343   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:38.875266   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 19:54:38.875302   61304 api_server.go:103] status: https://192.168.50.54:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 19:54:39.084855   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:39.090284   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:54:39.090320   61304 api_server.go:103] status: https://192.168.50.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:54:39.584948   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:39.596854   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:54:39.596885   61304 api_server.go:103] status: https://192.168.50.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:54:40.085087   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:40.094581   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 19:54:40.094613   61304 api_server.go:103] status: https://192.168.50.54:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 19:54:40.584231   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:40.588861   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 200:
	ok
	I0429 19:54:40.596661   61304 api_server.go:141] control plane version: v1.30.0
	I0429 19:54:40.596695   61304 api_server.go:131] duration metric: took 5.012650451s to wait for apiserver health ...
	I0429 19:54:40.596707   61304 cni.go:84] Creating CNI manager for ""
	I0429 19:54:40.596715   61304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:54:40.598645   61304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
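The api_server.go lines above show what this wait looks like from minikube's side: it polls https://192.168.50.54:8443/healthz roughly every 500ms, treats the early 403 ("system:anonymous") and 500 (bootstrap hooks still failing) responses as "not ready yet", and stops as soon as the endpoint returns 200. A minimal sketch of that style of poll, assuming an anonymous client that skips TLS verification (the real code authenticates with the cluster's client certificates):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline expires; 403/500 responses are printed and retried,
    // mirroring the behaviour visible in the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.54:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }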
	I0429 19:54:36.756824   61801 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:54:36.756873   61801 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 19:54:36.756889   61801 cache.go:56] Caching tarball of preloaded images
	I0429 19:54:36.757002   61801 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 19:54:36.757018   61801 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 19:54:36.757139   61801 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/kubernetes-upgrade-935578/config.json ...
	I0429 19:54:36.757409   61801 start.go:360] acquireMachinesLock for kubernetes-upgrade-935578: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:54:40.600192   61304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 19:54:40.615497   61304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 19:54:40.638015   61304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:54:40.650678   61304 system_pods.go:59] 6 kube-system pods found
	I0429 19:54:40.650733   61304 system_pods.go:61] "coredns-7db6d8ff4d-lxtq2" [db9d4855-6b30-41d8-b97d-2e8bab9e7135] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 19:54:40.650747   61304 system_pods.go:61] "etcd-pause-467472" [c4fcb8eb-d378-4229-b3d2-bf0d8da6d4a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 19:54:40.650759   61304 system_pods.go:61] "kube-apiserver-pause-467472" [52345820-7c48-453b-8b9a-1c837d664ea7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 19:54:40.650771   61304 system_pods.go:61] "kube-controller-manager-pause-467472" [7cabe046-247c-4cda-83e2-3a34ebf9db66] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 19:54:40.650781   61304 system_pods.go:61] "kube-proxy-2brrw" [dc85d0aa-db2c-4c9a-a318-19fd8634c217] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0429 19:54:40.650788   61304 system_pods.go:61] "kube-scheduler-pause-467472" [0455fda0-9152-4212-97f5-764a57a328dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 19:54:40.650808   61304 system_pods.go:74] duration metric: took 12.76494ms to wait for pod list to return data ...
	I0429 19:54:40.650832   61304 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:54:40.654908   61304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:54:40.654948   61304 node_conditions.go:123] node cpu capacity is 2
	I0429 19:54:40.654963   61304 node_conditions.go:105] duration metric: took 4.119961ms to run NodePressure ...
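system_pods.go and node_conditions.go above are two quick reads against the now-healthy apiserver: list the kube-system pods, then report each node's cpu and ephemeral-storage capacity. A rough client-go equivalent, assuming a kubeconfig at the default location (illustrative only, not minikube's exact code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load ~/.kube/config; minikube builds its client from the profile's kubeconfig instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Equivalent of system_pods.go: list everything in kube-system.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))

        // Equivalent of node_conditions.go: report per-node capacity.
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }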
	I0429 19:54:40.654986   61304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 19:54:40.952883   61304 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 19:54:40.959638   61304 kubeadm.go:733] kubelet initialised
	I0429 19:54:40.959670   61304 kubeadm.go:734] duration metric: took 6.753272ms waiting for restarted kubelet to initialise ...
	I0429 19:54:40.959679   61304 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:54:40.968789   61304 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:42.975963   61304 pod_ready.go:102] pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace has status "Ready":"False"
	I0429 19:54:40.384100   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:40.384625   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | unable to find current IP address of domain cert-expiration-509508 in network mk-cert-expiration-509508
	I0429 19:54:40.384648   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | I0429 19:54:40.384586   61621 retry.go:31] will retry after 2.83087137s: waiting for machine to come up
	I0429 19:54:43.216864   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:43.217395   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | unable to find current IP address of domain cert-expiration-509508 in network mk-cert-expiration-509508
	I0429 19:54:43.217410   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | I0429 19:54:43.217319   61621 retry.go:31] will retry after 2.889221716s: waiting for machine to come up
	I0429 19:54:45.477042   61304 pod_ready.go:102] pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace has status "Ready":"False"
	I0429 19:54:47.976991   61304 pod_ready.go:102] pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace has status "Ready":"False"
	I0429 19:54:46.110459   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:46.110948   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | unable to find current IP address of domain cert-expiration-509508 in network mk-cert-expiration-509508
	I0429 19:54:46.110970   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | I0429 19:54:46.110900   61621 retry.go:31] will retry after 5.231259953s: waiting for machine to come up
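The retry.go lines for cert-expiration-509508 show libmachine polling libvirt's DHCP leases for the new domain's IP and sleeping a growing, jittered interval between attempts (2.1s, 2.5s, 2.8s, 5.2s). A generic sketch of that retry shape, with hypothetical names:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("machine has no IP yet")

    // retryWithBackoff keeps calling fn until it succeeds or the budget is spent,
    // waiting a growing, jittered interval between attempts - the same shape as
    // the "will retry after N: waiting for machine to come up" lines above.
    func retryWithBackoff(fn func() error, budget time.Duration) error {
        start := time.Now()
        wait := 2 * time.Second
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Since(start) > budget {
                return fmt.Errorf("gave up after %s: %w", time.Since(start), err)
            }
            // Add up to 50% jitter, then grow the base interval.
            sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
            wait = wait * 3 / 2
        }
    }

    func main() {
        attempts := 0
        _ = retryWithBackoff(func() error {
            attempts++
            if attempts < 4 {
                return errNoIP // e.g. DHCP lease not visible yet
            }
            return nil
        }, time.Minute)
    }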
	I0429 19:54:52.939537   61545 start.go:364] duration metric: took 33.088858515s to acquireMachinesLock for "force-systemd-flag-090341"
	I0429 19:54:52.939599   61545 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-090341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.0 ClusterName:force-systemd-flag-090341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:54:52.941175   61545 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 19:54:48.976580   61304 pod_ready.go:92] pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:48.976605   61304 pod_ready.go:81] duration metric: took 8.007783761s for pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:48.976614   61304 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:50.983258   61304 pod_ready.go:102] pod "etcd-pause-467472" in "kube-system" namespace has status "Ready":"False"
	I0429 19:54:52.987271   61304 pod_ready.go:102] pod "etcd-pause-467472" in "kube-system" namespace has status "Ready":"False"
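pod_ready.go above waits up to 4m per system-critical pod for its Ready condition to become True; the "Ready":"False" lines are the intermediate polls. The check itself reduces to scanning pod.Status.Conditions, roughly like this simplified client-go helper (not minikube's exact code):

    package readiness

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitForPodReady polls a named pod until it is Ready or the timeout expires.
    func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }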
	I0429 19:54:51.343596   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.344112   61469 main.go:141] libmachine: (cert-expiration-509508) Found IP for machine: 192.168.61.227
	I0429 19:54:51.344125   61469 main.go:141] libmachine: (cert-expiration-509508) Reserving static IP address...
	I0429 19:54:51.344133   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has current primary IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.344503   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | unable to find host DHCP lease matching {name: "cert-expiration-509508", mac: "52:54:00:a6:1a:b3", ip: "192.168.61.227"} in network mk-cert-expiration-509508
	I0429 19:54:51.417592   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | Getting to WaitForSSH function...
	I0429 19:54:51.417619   61469 main.go:141] libmachine: (cert-expiration-509508) Reserved static IP address: 192.168.61.227
	I0429 19:54:51.417633   61469 main.go:141] libmachine: (cert-expiration-509508) Waiting for SSH to be available...
	I0429 19:54:51.420434   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.420912   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:51.420970   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.421090   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | Using SSH client type: external
	I0429 19:54:51.421106   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/cert-expiration-509508/id_rsa (-rw-------)
	I0429 19:54:51.421135   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/cert-expiration-509508/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:54:51.421143   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | About to run SSH command:
	I0429 19:54:51.421153   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | exit 0
	I0429 19:54:51.547466   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | SSH cmd err, output: <nil>: 
	I0429 19:54:51.547730   61469 main.go:141] libmachine: (cert-expiration-509508) KVM machine creation complete!
	I0429 19:54:51.548002   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetConfigRaw
	I0429 19:54:51.548683   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:51.548888   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:51.549111   61469 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 19:54:51.549121   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetState
	I0429 19:54:51.550594   61469 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 19:54:51.550604   61469 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 19:54:51.550610   61469 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 19:54:51.550618   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:51.553389   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.553713   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:51.553740   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.553865   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:51.554042   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.554210   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.554366   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:51.554532   61469 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:51.554728   61469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0429 19:54:51.554733   61469 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 19:54:51.658578   61469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
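WaitForSSH above is just "run `exit 0` over SSH until it succeeds", using the machine's generated id_rsa key and the docker user. A stripped-down version with golang.org/x/crypto/ssh (host, user and key path taken from the log; the retry loop is omitted):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // sshReady dials the machine and runs "exit 0", the same liveness probe
    // WaitForSSH uses in the log above.
    func sshReady(addr, user, keyPath string) error {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // freshly created test VM, no known host key
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0")
    }

    func main() {
        err := sshReady("192.168.61.227:22", "docker",
            "/home/jenkins/minikube-integration/18774-7754/.minikube/machines/cert-expiration-509508/id_rsa")
        fmt.Println("ssh ready:", err == nil)
    }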
	I0429 19:54:51.658589   61469 main.go:141] libmachine: Detecting the provisioner...
	I0429 19:54:51.658595   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:51.661685   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.662148   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:51.662168   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.662397   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:51.662625   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.662806   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.662991   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:51.663147   61469 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:51.663353   61469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0429 19:54:51.663362   61469 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 19:54:51.767557   61469 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 19:54:51.767648   61469 main.go:141] libmachine: found compatible host: buildroot
	I0429 19:54:51.767656   61469 main.go:141] libmachine: Provisioning with buildroot...
	I0429 19:54:51.767667   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetMachineName
	I0429 19:54:51.767932   61469 buildroot.go:166] provisioning hostname "cert-expiration-509508"
	I0429 19:54:51.767949   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetMachineName
	I0429 19:54:51.768153   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:51.770922   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.771314   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:51.771335   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.771466   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:51.771642   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.771792   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.771917   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:51.772050   61469 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:51.772221   61469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0429 19:54:51.772227   61469 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-509508 && echo "cert-expiration-509508" | sudo tee /etc/hostname
	I0429 19:54:51.891046   61469 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-509508
	
	I0429 19:54:51.891064   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:51.893645   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.894045   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:51.894099   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:51.894276   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:51.894484   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.894603   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:51.894751   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:51.894891   61469 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:51.895055   61469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0429 19:54:51.895068   61469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-509508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-509508/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-509508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:54:52.008825   61469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:54:52.008841   61469 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:54:52.008868   61469 buildroot.go:174] setting up certificates
	I0429 19:54:52.008881   61469 provision.go:84] configureAuth start
	I0429 19:54:52.008892   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetMachineName
	I0429 19:54:52.009180   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetIP
	I0429 19:54:52.011847   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.012204   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.012220   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.012370   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.014837   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.015147   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.015163   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.015304   61469 provision.go:143] copyHostCerts
	I0429 19:54:52.015366   61469 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:54:52.015373   61469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:54:52.015440   61469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:54:52.015607   61469 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:54:52.015613   61469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:54:52.015645   61469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:54:52.015735   61469 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:54:52.015740   61469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:54:52.015763   61469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:54:52.015841   61469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-509508 san=[127.0.0.1 192.168.61.227 cert-expiration-509508 localhost minikube]
	I0429 19:54:52.214998   61469 provision.go:177] copyRemoteCerts
	I0429 19:54:52.215051   61469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:54:52.215071   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.217776   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.218120   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.218147   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.218319   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:52.218487   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.218626   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:52.218754   61469 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/cert-expiration-509508/id_rsa Username:docker}
	I0429 19:54:52.303472   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:54:52.331964   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0429 19:54:52.359334   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 19:54:52.387088   61469 provision.go:87] duration metric: took 378.197066ms to configureAuth
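configureAuth above generates a server certificate for the new machine, signed by the shared minikube CA, with the SANs listed in the provision.go:117 line (127.0.0.1, 192.168.61.227, the hostname, localhost, minikube), then copies the CA cert and the server key pair into /etc/docker. A condensed crypto/x509 sketch of the generation step (file paths are placeholders, error handling is trimmed, and this is illustrative rather than minikube's exact code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // mustPEMBlock reads a PEM file and returns its first block (errors ignored for brevity).
    func mustPEMBlock(path string) *pem.Block {
        raw, _ := os.ReadFile(path)
        block, _ := pem.Decode(raw)
        return block
    }

    func main() {
        // Load the shared CA (stand-ins for the .minikube/certs/ca.pem and ca-key.pem files).
        caCert, _ := x509.ParseCertificate(mustPEMBlock("ca.pem").Bytes)
        caKey, _ := x509.ParsePKCS1PrivateKey(mustPEMBlock("ca-key.pem").Bytes)

        // Fresh key for the machine's server certificate.
        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-509508"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log line above.
            DNSNames:    []string{"cert-expiration-509508", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.227")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)

        _ = os.WriteFile("server.pem",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
        _ = os.WriteFile("server-key.pem",
            pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
    }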
	I0429 19:54:52.387106   61469 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:54:52.387277   61469 config.go:182] Loaded profile config "cert-expiration-509508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:54:52.387355   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.390131   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.390515   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.390538   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.390716   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:52.390903   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.391056   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.391164   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:52.391330   61469 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:52.391484   61469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0429 19:54:52.391493   61469 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:54:52.692586   61469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 19:54:52.692600   61469 main.go:141] libmachine: Checking connection to Docker...
	I0429 19:54:52.692610   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetURL
	I0429 19:54:52.694022   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | Using libvirt version 6000000
	I0429 19:54:52.696314   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.696594   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.696618   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.696849   61469 main.go:141] libmachine: Docker is up and running!
	I0429 19:54:52.696857   61469 main.go:141] libmachine: Reticulating splines...
	I0429 19:54:52.696862   61469 client.go:171] duration metric: took 25.591340389s to LocalClient.Create
	I0429 19:54:52.696884   61469 start.go:167] duration metric: took 25.591405786s to libmachine.API.Create "cert-expiration-509508"
	I0429 19:54:52.696891   61469 start.go:293] postStartSetup for "cert-expiration-509508" (driver="kvm2")
	I0429 19:54:52.696904   61469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:54:52.696922   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:52.697162   61469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:54:52.697179   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.700011   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.700388   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.700411   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.700542   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:52.700728   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.700871   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:52.701026   61469 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/cert-expiration-509508/id_rsa Username:docker}
	I0429 19:54:52.782704   61469 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:54:52.787995   61469 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:54:52.788012   61469 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:54:52.788091   61469 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:54:52.788190   61469 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:54:52.788311   61469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:54:52.800390   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:54:52.830933   61469 start.go:296] duration metric: took 134.029108ms for postStartSetup
	I0429 19:54:52.830978   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetConfigRaw
	I0429 19:54:52.831724   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetIP
	I0429 19:54:52.834527   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.834913   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.834930   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.835213   61469 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/config.json ...
	I0429 19:54:52.835402   61469 start.go:128] duration metric: took 25.754630638s to createHost
	I0429 19:54:52.835421   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.837896   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.838281   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.838300   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.838431   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:52.838600   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.838770   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.838941   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:52.839169   61469 main.go:141] libmachine: Using SSH client type: native
	I0429 19:54:52.839328   61469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.227 22 <nil> <nil>}
	I0429 19:54:52.839335   61469 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 19:54:52.939414   61469 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714420492.919345164
	
	I0429 19:54:52.939426   61469 fix.go:216] guest clock: 1714420492.919345164
	I0429 19:54:52.939434   61469 fix.go:229] Guest: 2024-04-29 19:54:52.919345164 +0000 UTC Remote: 2024-04-29 19:54:52.835408361 +0000 UTC m=+43.382949359 (delta=83.936803ms)
	I0429 19:54:52.939457   61469 fix.go:200] guest clock delta is within tolerance: 83.936803ms
	I0429 19:54:52.939463   61469 start.go:83] releasing machines lock for "cert-expiration-509508", held for 25.858854881s
	I0429 19:54:52.939489   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:52.939784   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetIP
	I0429 19:54:52.943449   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.943823   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.943844   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.944021   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:52.944532   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:52.944704   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .DriverName
	I0429 19:54:52.944799   61469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:54:52.944832   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.944905   61469 ssh_runner.go:195] Run: cat /version.json
	I0429 19:54:52.944917   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHHostname
	I0429 19:54:52.948388   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.948675   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.948884   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.948897   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.949085   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:52.949109   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:52.949128   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:52.949240   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.949348   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHPort
	I0429 19:54:52.949413   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:52.949528   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHKeyPath
	I0429 19:54:52.949601   61469 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/cert-expiration-509508/id_rsa Username:docker}
	I0429 19:54:52.949887   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetSSHUsername
	I0429 19:54:52.950035   61469 sshutil.go:53] new ssh client: &{IP:192.168.61.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/cert-expiration-509508/id_rsa Username:docker}
	I0429 19:54:53.059030   61469 ssh_runner.go:195] Run: systemctl --version
	I0429 19:54:53.067582   61469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:54:53.245086   61469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:54:53.253409   61469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:54:53.253484   61469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:54:53.272160   61469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:54:53.272177   61469 start.go:494] detecting cgroup driver to use...
	I0429 19:54:53.272259   61469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:54:53.292874   61469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:54:53.309571   61469 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:54:53.309631   61469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:54:53.329917   61469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:54:53.350168   61469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:54:53.491686   61469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:54:53.652022   61469 docker.go:233] disabling docker service ...
	I0429 19:54:53.652087   61469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:54:53.670214   61469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:54:53.686954   61469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:54:53.843358   61469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:54:53.997569   61469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 19:54:54.016046   61469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:54:54.042400   61469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 19:54:54.042453   61469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:54.054457   61469 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:54:54.054515   61469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:54.067690   61469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:54.079318   61469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:54.091529   61469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:54:54.103318   61469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:54.115430   61469 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:54:54.135219   61469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
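The sed edits above amount to the following fragment of /etc/crio/crio.conf.d/02-crio.conf. The TOML section headers are assumed here for illustration (the log only shows the key rewrites), so treat this as a sketch of the resulting config rather than the exact file on the node:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]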
	I0429 19:54:54.147282   61469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:54:54.157795   61469 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 19:54:54.157860   61469 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 19:54:54.173866   61469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 19:54:54.187698   61469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:54:54.346354   61469 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 19:54:54.493804   61304 pod_ready.go:92] pod "etcd-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:54.493838   61304 pod_ready.go:81] duration metric: took 5.517216695s for pod "etcd-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.493852   61304 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.500971   61304 pod_ready.go:92] pod "kube-apiserver-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:54.500999   61304 pod_ready.go:81] duration metric: took 7.138665ms for pod "kube-apiserver-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.501012   61304 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.506242   61304 pod_ready.go:92] pod "kube-controller-manager-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:54.506268   61304 pod_ready.go:81] duration metric: took 5.247358ms for pod "kube-controller-manager-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.506280   61304 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2brrw" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.512205   61304 pod_ready.go:92] pod "kube-proxy-2brrw" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:54.512224   61304 pod_ready.go:81] duration metric: took 5.935782ms for pod "kube-proxy-2brrw" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.512234   61304 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.518188   61304 pod_ready.go:92] pod "kube-scheduler-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:54.518212   61304 pod_ready.go:81] duration metric: took 5.97113ms for pod "kube-scheduler-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:54.518221   61304 pod_ready.go:38] duration metric: took 13.558530768s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:54:54.518241   61304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 19:54:54.533450   61304 ops.go:34] apiserver oom_adj: -16
	I0429 19:54:54.533469   61304 kubeadm.go:591] duration metric: took 22.073541311s to restartPrimaryControlPlane
	I0429 19:54:54.533479   61304 kubeadm.go:393] duration metric: took 22.176881709s to StartCluster
	I0429 19:54:54.533496   61304 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:54:54.533573   61304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:54:54.534577   61304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:54:54.534848   61304 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:54:54.537639   61304 out.go:177] * Verifying Kubernetes components...
	I0429 19:54:54.534980   61304 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 19:54:54.535174   61304 config.go:182] Loaded profile config "pause-467472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:54:54.539254   61304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:54:54.540605   61304 out.go:177] * Enabled addons: 
	I0429 19:54:54.517908   61469 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:54:54.517983   61469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:54:54.524804   61469 start.go:562] Will wait 60s for crictl version
	I0429 19:54:54.524872   61469 ssh_runner.go:195] Run: which crictl
	I0429 19:54:54.531801   61469 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:54:54.586173   61469 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 19:54:54.586253   61469 ssh_runner.go:195] Run: crio --version
	I0429 19:54:54.632689   61469 ssh_runner.go:195] Run: crio --version
	I0429 19:54:54.674233   61469 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 19:54:52.943005   61545 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0429 19:54:52.943201   61545 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:54:52.943236   61545 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:54:52.960643   61545 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45587
	I0429 19:54:52.961063   61545 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:54:52.961595   61545 main.go:141] libmachine: Using API Version  1
	I0429 19:54:52.961615   61545 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:54:52.962013   61545 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:54:52.962272   61545 main.go:141] libmachine: (force-systemd-flag-090341) Calling .GetMachineName
	I0429 19:54:52.962453   61545 main.go:141] libmachine: (force-systemd-flag-090341) Calling .DriverName
	I0429 19:54:52.962614   61545 start.go:159] libmachine.API.Create for "force-systemd-flag-090341" (driver="kvm2")
	I0429 19:54:52.962641   61545 client.go:168] LocalClient.Create starting
	I0429 19:54:52.962676   61545 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 19:54:52.962716   61545 main.go:141] libmachine: Decoding PEM data...
	I0429 19:54:52.962737   61545 main.go:141] libmachine: Parsing certificate...
	I0429 19:54:52.962815   61545 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 19:54:52.962849   61545 main.go:141] libmachine: Decoding PEM data...
	I0429 19:54:52.962867   61545 main.go:141] libmachine: Parsing certificate...
	I0429 19:54:52.962893   61545 main.go:141] libmachine: Running pre-create checks...
	I0429 19:54:52.962906   61545 main.go:141] libmachine: (force-systemd-flag-090341) Calling .PreCreateCheck
	I0429 19:54:52.963306   61545 main.go:141] libmachine: (force-systemd-flag-090341) Calling .GetConfigRaw
	I0429 19:54:52.963740   61545 main.go:141] libmachine: Creating machine...
	I0429 19:54:52.963756   61545 main.go:141] libmachine: (force-systemd-flag-090341) Calling .Create
	I0429 19:54:52.963890   61545 main.go:141] libmachine: (force-systemd-flag-090341) Creating KVM machine...
	I0429 19:54:52.965039   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | found existing default KVM network
	I0429 19:54:52.966216   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:52.966008   61938 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:31:bc:7c} reservation:<nil>}
	I0429 19:54:52.967036   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:52.966954   61938 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e5:37:4d} reservation:<nil>}
	I0429 19:54:52.968134   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:52.968045   61938 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:99:a1:58} reservation:<nil>}
	I0429 19:54:52.969494   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:52.969417   61938 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002899b0}
	I0429 19:54:52.969540   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | created network xml: 
	I0429 19:54:52.969564   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | <network>
	I0429 19:54:52.969578   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |   <name>mk-force-systemd-flag-090341</name>
	I0429 19:54:52.969597   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |   <dns enable='no'/>
	I0429 19:54:52.969620   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |   
	I0429 19:54:52.969642   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0429 19:54:52.969652   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |     <dhcp>
	I0429 19:54:52.969663   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0429 19:54:52.969669   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |     </dhcp>
	I0429 19:54:52.969674   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |   </ip>
	I0429 19:54:52.969680   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG |   
	I0429 19:54:52.969685   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | </network>
	I0429 19:54:52.969692   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | 
	I0429 19:54:52.975119   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | trying to create private KVM network mk-force-systemd-flag-090341 192.168.72.0/24...
	I0429 19:54:53.049705   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | private KVM network mk-force-systemd-flag-090341 192.168.72.0/24 created
	I0429 19:54:53.049742   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341 ...
	I0429 19:54:53.049759   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:53.049644   61938 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:54:53.049809   61545 main.go:141] libmachine: (force-systemd-flag-090341) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 19:54:53.049847   61545 main.go:141] libmachine: (force-systemd-flag-090341) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 19:54:53.280389   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:53.280214   61938 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341/id_rsa...
	I0429 19:54:53.369397   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:53.369239   61938 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341/force-systemd-flag-090341.rawdisk...
	I0429 19:54:53.369435   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Writing magic tar header
	I0429 19:54:53.369455   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Writing SSH key tar header
	I0429 19:54:53.369474   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:53.369360   61938 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341 ...
	I0429 19:54:53.369491   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341 (perms=drwx------)
	I0429 19:54:53.369512   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341
	I0429 19:54:53.369550   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 19:54:53.369575   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 19:54:53.369590   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:54:53.369604   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 19:54:53.369618   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 19:54:53.369635   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home/jenkins
	I0429 19:54:53.369649   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Checking permissions on dir: /home
	I0429 19:54:53.369664   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 19:54:53.369679   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 19:54:53.369692   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 19:54:53.369706   61545 main.go:141] libmachine: (force-systemd-flag-090341) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 19:54:53.369714   61545 main.go:141] libmachine: (force-systemd-flag-090341) Creating domain...
	I0429 19:54:53.369729   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | Skipping /home - not owner
	I0429 19:54:53.371595   61545 main.go:141] libmachine: (force-systemd-flag-090341) define libvirt domain using xml: 
	I0429 19:54:53.371621   61545 main.go:141] libmachine: (force-systemd-flag-090341) <domain type='kvm'>
	I0429 19:54:53.371664   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <name>force-systemd-flag-090341</name>
	I0429 19:54:53.371689   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <memory unit='MiB'>2048</memory>
	I0429 19:54:53.371703   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <vcpu>2</vcpu>
	I0429 19:54:53.371715   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <features>
	I0429 19:54:53.371735   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <acpi/>
	I0429 19:54:53.371746   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <apic/>
	I0429 19:54:53.371753   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <pae/>
	I0429 19:54:53.371760   61545 main.go:141] libmachine: (force-systemd-flag-090341)     
	I0429 19:54:53.371769   61545 main.go:141] libmachine: (force-systemd-flag-090341)   </features>
	I0429 19:54:53.371776   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <cpu mode='host-passthrough'>
	I0429 19:54:53.371792   61545 main.go:141] libmachine: (force-systemd-flag-090341)   
	I0429 19:54:53.371799   61545 main.go:141] libmachine: (force-systemd-flag-090341)   </cpu>
	I0429 19:54:53.371807   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <os>
	I0429 19:54:53.371814   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <type>hvm</type>
	I0429 19:54:53.371822   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <boot dev='cdrom'/>
	I0429 19:54:53.371829   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <boot dev='hd'/>
	I0429 19:54:53.371850   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <bootmenu enable='no'/>
	I0429 19:54:53.371858   61545 main.go:141] libmachine: (force-systemd-flag-090341)   </os>
	I0429 19:54:53.371866   61545 main.go:141] libmachine: (force-systemd-flag-090341)   <devices>
	I0429 19:54:53.371874   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <disk type='file' device='cdrom'>
	I0429 19:54:53.371886   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341/boot2docker.iso'/>
	I0429 19:54:53.371896   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <target dev='hdc' bus='scsi'/>
	I0429 19:54:53.371905   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <readonly/>
	I0429 19:54:53.371911   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </disk>
	I0429 19:54:53.371920   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <disk type='file' device='disk'>
	I0429 19:54:53.371930   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 19:54:53.371943   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/force-systemd-flag-090341/force-systemd-flag-090341.rawdisk'/>
	I0429 19:54:53.371951   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <target dev='hda' bus='virtio'/>
	I0429 19:54:53.371983   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </disk>
	I0429 19:54:53.371999   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <interface type='network'>
	I0429 19:54:53.372014   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <source network='mk-force-systemd-flag-090341'/>
	I0429 19:54:53.372027   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <model type='virtio'/>
	I0429 19:54:53.372040   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </interface>
	I0429 19:54:53.372052   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <interface type='network'>
	I0429 19:54:53.372066   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <source network='default'/>
	I0429 19:54:53.372077   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <model type='virtio'/>
	I0429 19:54:53.372087   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </interface>
	I0429 19:54:53.372099   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <serial type='pty'>
	I0429 19:54:53.372112   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <target port='0'/>
	I0429 19:54:53.372127   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </serial>
	I0429 19:54:53.372142   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <console type='pty'>
	I0429 19:54:53.372154   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <target type='serial' port='0'/>
	I0429 19:54:53.372168   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </console>
	I0429 19:54:53.372179   61545 main.go:141] libmachine: (force-systemd-flag-090341)     <rng model='virtio'>
	I0429 19:54:53.372191   61545 main.go:141] libmachine: (force-systemd-flag-090341)       <backend model='random'>/dev/random</backend>
	I0429 19:54:53.372200   61545 main.go:141] libmachine: (force-systemd-flag-090341)     </rng>
	I0429 19:54:53.372212   61545 main.go:141] libmachine: (force-systemd-flag-090341)     
	I0429 19:54:53.372223   61545 main.go:141] libmachine: (force-systemd-flag-090341)     
	I0429 19:54:53.372235   61545 main.go:141] libmachine: (force-systemd-flag-090341)   </devices>
	I0429 19:54:53.372246   61545 main.go:141] libmachine: (force-systemd-flag-090341) </domain>
	I0429 19:54:53.372259   61545 main.go:141] libmachine: (force-systemd-flag-090341) 
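The network and domain XML above are defined through the libvirt API by the kvm2 driver; a rough manual equivalent with virsh would be the following (file names are hypothetical, shown only to illustrate the same libvirt objects, not what the driver literally executes):

	# save the <network> XML as net.xml and the <domain> XML as dom.xml first
	virsh net-define net.xml && virsh net-start mk-force-systemd-flag-090341
	virsh define dom.xml
	virsh start force-systemd-flag-090341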
	I0429 19:54:53.377894   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | domain force-systemd-flag-090341 has defined MAC address 52:54:00:da:70:cb in network default
	I0429 19:54:53.378555   61545 main.go:141] libmachine: (force-systemd-flag-090341) Ensuring networks are active...
	I0429 19:54:53.378592   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | domain force-systemd-flag-090341 has defined MAC address 52:54:00:db:9f:a1 in network mk-force-systemd-flag-090341
	I0429 19:54:53.379287   61545 main.go:141] libmachine: (force-systemd-flag-090341) Ensuring network default is active
	I0429 19:54:53.379595   61545 main.go:141] libmachine: (force-systemd-flag-090341) Ensuring network mk-force-systemd-flag-090341 is active
	I0429 19:54:53.380183   61545 main.go:141] libmachine: (force-systemd-flag-090341) Getting domain xml...
	I0429 19:54:53.380837   61545 main.go:141] libmachine: (force-systemd-flag-090341) Creating domain...
	I0429 19:54:54.712069   61545 main.go:141] libmachine: (force-systemd-flag-090341) Waiting to get IP...
	I0429 19:54:54.712949   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | domain force-systemd-flag-090341 has defined MAC address 52:54:00:db:9f:a1 in network mk-force-systemd-flag-090341
	I0429 19:54:54.713407   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | unable to find current IP address of domain force-systemd-flag-090341 in network mk-force-systemd-flag-090341
	I0429 19:54:54.713466   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:54.713402   61938 retry.go:31] will retry after 231.042588ms: waiting for machine to come up
	I0429 19:54:54.541902   61304 addons.go:505] duration metric: took 6.937577ms for enable addons: enabled=[]
	I0429 19:54:54.758273   61304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:54:54.783638   61304 node_ready.go:35] waiting up to 6m0s for node "pause-467472" to be "Ready" ...
	I0429 19:54:54.787713   61304 node_ready.go:49] node "pause-467472" has status "Ready":"True"
	I0429 19:54:54.787738   61304 node_ready.go:38] duration metric: took 4.064321ms for node "pause-467472" to be "Ready" ...
	I0429 19:54:54.787750   61304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 19:54:54.888608   61304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:55.283354   61304 pod_ready.go:92] pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:55.283385   61304 pod_ready.go:81] duration metric: took 394.745277ms for pod "coredns-7db6d8ff4d-lxtq2" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:55.283398   61304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:55.685493   61304 pod_ready.go:92] pod "etcd-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:55.685521   61304 pod_ready.go:81] duration metric: took 402.114974ms for pod "etcd-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:55.685534   61304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:56.082370   61304 pod_ready.go:92] pod "kube-apiserver-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:56.082401   61304 pod_ready.go:81] duration metric: took 396.858387ms for pod "kube-apiserver-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:56.082414   61304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:56.482131   61304 pod_ready.go:92] pod "kube-controller-manager-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:56.482157   61304 pod_ready.go:81] duration metric: took 399.734186ms for pod "kube-controller-manager-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:56.482171   61304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2brrw" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:56.882736   61304 pod_ready.go:92] pod "kube-proxy-2brrw" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:56.882766   61304 pod_ready.go:81] duration metric: took 400.586597ms for pod "kube-proxy-2brrw" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:56.882778   61304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:57.283776   61304 pod_ready.go:92] pod "kube-scheduler-pause-467472" in "kube-system" namespace has status "Ready":"True"
	I0429 19:54:57.283808   61304 pod_ready.go:81] duration metric: took 401.021104ms for pod "kube-scheduler-pause-467472" in "kube-system" namespace to be "Ready" ...
	I0429 19:54:57.283830   61304 pod_ready.go:38] duration metric: took 2.496067508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
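minikube runs these readiness checks through its own Kubernetes client; an approximate hand-run equivalent for one of the label selectors would be the following (hypothetical invocation, shown only to illustrate the check being performed):

	kubectl --context pause-467472 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s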
	I0429 19:54:57.283851   61304 api_server.go:52] waiting for apiserver process to appear ...
	I0429 19:54:57.283937   61304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:54:57.313110   61304 api_server.go:72] duration metric: took 2.778228022s to wait for apiserver process to appear ...
	I0429 19:54:57.313199   61304 api_server.go:88] waiting for apiserver healthz status ...
	I0429 19:54:57.313222   61304 api_server.go:253] Checking apiserver healthz at https://192.168.50.54:8443/healthz ...
	I0429 19:54:57.319584   61304 api_server.go:279] https://192.168.50.54:8443/healthz returned 200:
	ok
	I0429 19:54:57.320853   61304 api_server.go:141] control plane version: v1.30.0
	I0429 19:54:57.320887   61304 api_server.go:131] duration metric: took 7.677755ms to wait for apiserver health ...
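The healthz probe above hits the apiserver endpoint directly; assuming the default kubeadm RBAC that exposes /healthz to unauthenticated clients, the same check could be reproduced by hand roughly as:

	curl -sk https://192.168.50.54:8443/healthz   # expected output: ok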
	I0429 19:54:57.320897   61304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 19:54:57.486026   61304 system_pods.go:59] 6 kube-system pods found
	I0429 19:54:57.486054   61304 system_pods.go:61] "coredns-7db6d8ff4d-lxtq2" [db9d4855-6b30-41d8-b97d-2e8bab9e7135] Running
	I0429 19:54:57.486058   61304 system_pods.go:61] "etcd-pause-467472" [c4fcb8eb-d378-4229-b3d2-bf0d8da6d4a5] Running
	I0429 19:54:57.486062   61304 system_pods.go:61] "kube-apiserver-pause-467472" [52345820-7c48-453b-8b9a-1c837d664ea7] Running
	I0429 19:54:57.486076   61304 system_pods.go:61] "kube-controller-manager-pause-467472" [7cabe046-247c-4cda-83e2-3a34ebf9db66] Running
	I0429 19:54:57.486079   61304 system_pods.go:61] "kube-proxy-2brrw" [dc85d0aa-db2c-4c9a-a318-19fd8634c217] Running
	I0429 19:54:57.486088   61304 system_pods.go:61] "kube-scheduler-pause-467472" [0455fda0-9152-4212-97f5-764a57a328dc] Running
	I0429 19:54:57.486094   61304 system_pods.go:74] duration metric: took 165.190494ms to wait for pod list to return data ...
	I0429 19:54:57.486104   61304 default_sa.go:34] waiting for default service account to be created ...
	I0429 19:54:57.682814   61304 default_sa.go:45] found service account: "default"
	I0429 19:54:57.682847   61304 default_sa.go:55] duration metric: took 196.735363ms for default service account to be created ...
	I0429 19:54:57.682860   61304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 19:54:57.886303   61304 system_pods.go:86] 6 kube-system pods found
	I0429 19:54:57.886342   61304 system_pods.go:89] "coredns-7db6d8ff4d-lxtq2" [db9d4855-6b30-41d8-b97d-2e8bab9e7135] Running
	I0429 19:54:57.886350   61304 system_pods.go:89] "etcd-pause-467472" [c4fcb8eb-d378-4229-b3d2-bf0d8da6d4a5] Running
	I0429 19:54:57.886357   61304 system_pods.go:89] "kube-apiserver-pause-467472" [52345820-7c48-453b-8b9a-1c837d664ea7] Running
	I0429 19:54:57.886364   61304 system_pods.go:89] "kube-controller-manager-pause-467472" [7cabe046-247c-4cda-83e2-3a34ebf9db66] Running
	I0429 19:54:57.886370   61304 system_pods.go:89] "kube-proxy-2brrw" [dc85d0aa-db2c-4c9a-a318-19fd8634c217] Running
	I0429 19:54:57.886377   61304 system_pods.go:89] "kube-scheduler-pause-467472" [0455fda0-9152-4212-97f5-764a57a328dc] Running
	I0429 19:54:57.886387   61304 system_pods.go:126] duration metric: took 203.520155ms to wait for k8s-apps to be running ...
	I0429 19:54:57.886405   61304 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 19:54:57.886470   61304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:54:57.908443   61304 system_svc.go:56] duration metric: took 22.028252ms WaitForService to wait for kubelet
	I0429 19:54:57.908478   61304 kubeadm.go:576] duration metric: took 3.373599308s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:54:57.908502   61304 node_conditions.go:102] verifying NodePressure condition ...
	I0429 19:54:58.081494   61304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 19:54:58.081518   61304 node_conditions.go:123] node cpu capacity is 2
	I0429 19:54:58.081528   61304 node_conditions.go:105] duration metric: took 173.020499ms to run NodePressure ...
	I0429 19:54:58.081538   61304 start.go:240] waiting for startup goroutines ...
	I0429 19:54:58.081545   61304 start.go:245] waiting for cluster config update ...
	I0429 19:54:58.081551   61304 start.go:254] writing updated cluster config ...
	I0429 19:54:58.081823   61304 ssh_runner.go:195] Run: rm -f paused
	I0429 19:54:58.139557   61304 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 19:54:58.141721   61304 out.go:177] * Done! kubectl is now configured to use "pause-467472" cluster and "default" namespace by default
	I0429 19:54:54.675834   61469 main.go:141] libmachine: (cert-expiration-509508) Calling .GetIP
	I0429 19:54:54.679090   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:54.679533   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:1a:b3", ip: ""} in network mk-cert-expiration-509508: {Iface:virbr3 ExpiryTime:2024-04-29 20:54:43 +0000 UTC Type:0 Mac:52:54:00:a6:1a:b3 Iaid: IPaddr:192.168.61.227 Prefix:24 Hostname:cert-expiration-509508 Clientid:01:52:54:00:a6:1a:b3}
	I0429 19:54:54.679567   61469 main.go:141] libmachine: (cert-expiration-509508) DBG | domain cert-expiration-509508 has defined IP address 192.168.61.227 and MAC address 52:54:00:a6:1a:b3 in network mk-cert-expiration-509508
	I0429 19:54:54.679885   61469 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0429 19:54:54.686116   61469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:54:54.703662   61469 kubeadm.go:877] updating cluster {Name:cert-expiration-509508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:cert-expiration-509508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 19:54:54.703782   61469 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 19:54:54.703831   61469 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:54:54.752480   61469 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 19:54:54.752544   61469 ssh_runner.go:195] Run: which lz4
	I0429 19:54:54.758145   61469 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 19:54:54.763305   61469 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 19:54:54.763331   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 19:54:56.562059   61469 crio.go:462] duration metric: took 1.80394204s to copy over tarball
	I0429 19:54:56.562173   61469 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 19:54:59.400514   61469 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.838312491s)
	I0429 19:54:59.400535   61469 crio.go:469] duration metric: took 2.838439187s to extract the tarball
	I0429 19:54:59.400541   61469 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 19:54:59.454315   61469 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:54:54.945905   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | domain force-systemd-flag-090341 has defined MAC address 52:54:00:db:9f:a1 in network mk-force-systemd-flag-090341
	I0429 19:54:54.946453   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | unable to find current IP address of domain force-systemd-flag-090341 in network mk-force-systemd-flag-090341
	I0429 19:54:54.946477   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:54.946418   61938 retry.go:31] will retry after 258.103559ms: waiting for machine to come up
	I0429 19:54:55.205956   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | domain force-systemd-flag-090341 has defined MAC address 52:54:00:db:9f:a1 in network mk-force-systemd-flag-090341
	I0429 19:54:55.206467   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | unable to find current IP address of domain force-systemd-flag-090341 in network mk-force-systemd-flag-090341
	I0429 19:54:55.206502   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:55.206414   61938 retry.go:31] will retry after 329.654651ms: waiting for machine to come up
	I0429 19:54:55.538009   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | domain force-systemd-flag-090341 has defined MAC address 52:54:00:db:9f:a1 in network mk-force-systemd-flag-090341
	I0429 19:54:55.538570   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | unable to find current IP address of domain force-systemd-flag-090341 in network mk-force-systemd-flag-090341
	I0429 19:54:55.538599   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:55.538509   61938 retry.go:31] will retry after 466.071962ms: waiting for machine to come up
	I0429 19:54:56.006141   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | domain force-systemd-flag-090341 has defined MAC address 52:54:00:db:9f:a1 in network mk-force-systemd-flag-090341
	I0429 19:54:56.006704   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | unable to find current IP address of domain force-systemd-flag-090341 in network mk-force-systemd-flag-090341
	I0429 19:54:56.006732   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:56.006669   61938 retry.go:31] will retry after 616.961454ms: waiting for machine to come up
	I0429 19:54:56.625568   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | domain force-systemd-flag-090341 has defined MAC address 52:54:00:db:9f:a1 in network mk-force-systemd-flag-090341
	I0429 19:54:56.626112   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | unable to find current IP address of domain force-systemd-flag-090341 in network mk-force-systemd-flag-090341
	I0429 19:54:56.626147   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:56.626046   61938 retry.go:31] will retry after 850.629735ms: waiting for machine to come up
	I0429 19:54:57.478267   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | domain force-systemd-flag-090341 has defined MAC address 52:54:00:db:9f:a1 in network mk-force-systemd-flag-090341
	I0429 19:54:57.478780   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | unable to find current IP address of domain force-systemd-flag-090341 in network mk-force-systemd-flag-090341
	I0429 19:54:57.478804   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:57.478698   61938 retry.go:31] will retry after 1.043840064s: waiting for machine to come up
	I0429 19:54:58.524552   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | domain force-systemd-flag-090341 has defined MAC address 52:54:00:db:9f:a1 in network mk-force-systemd-flag-090341
	I0429 19:54:58.525134   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | unable to find current IP address of domain force-systemd-flag-090341 in network mk-force-systemd-flag-090341
	I0429 19:54:58.525165   61545 main.go:141] libmachine: (force-systemd-flag-090341) DBG | I0429 19:54:58.525093   61938 retry.go:31] will retry after 1.392842793s: waiting for machine to come up
	I0429 19:54:59.529446   61469 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 19:54:59.621467   61469 cache_images.go:84] Images are preloaded, skipping loading
	I0429 19:54:59.621490   61469 kubeadm.go:928] updating node { 192.168.61.227 8443 v1.30.0 crio true true} ...
	I0429 19:54:59.621616   61469 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-509508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:cert-expiration-509508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:54:59.621680   61469 ssh_runner.go:195] Run: crio config
	I0429 19:54:59.679094   61469 cni.go:84] Creating CNI manager for ""
	I0429 19:54:59.679115   61469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:54:59.679133   61469 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 19:54:59.679161   61469 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.227 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-509508 NodeName:cert-expiration-509508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 19:54:59.679412   61469 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-509508"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 19:54:59.679481   61469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 19:54:59.692598   61469 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 19:54:59.692658   61469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 19:54:59.703732   61469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0429 19:54:59.722292   61469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:54:59.744302   61469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
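The generated kubeadm config is staged on the node as /var/tmp/minikube/kubeadm.yaml.new; for a freshly created cluster it is later consumed by the bundled kubeadm binary, roughly of the form below (the exact invocation and its extra flags are not part of this excerpt):

	sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml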
	I0429 19:54:59.765666   61469 ssh_runner.go:195] Run: grep 192.168.61.227	control-plane.minikube.internal$ /etc/hosts
	I0429 19:54:59.770597   61469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.227	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:54:59.785899   61469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:54:59.914990   61469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 19:54:59.933657   61469 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508 for IP: 192.168.61.227
	I0429 19:54:59.933668   61469 certs.go:194] generating shared ca certs ...
	I0429 19:54:59.933684   61469 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:54:59.933851   61469 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 19:54:59.933886   61469 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 19:54:59.933891   61469 certs.go:256] generating profile certs ...
	I0429 19:54:59.933937   61469 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/client.key
	I0429 19:54:59.933946   61469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/client.crt with IP's: []
	I0429 19:55:00.217326   61469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/client.crt ...
	I0429 19:55:00.217360   61469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/client.crt: {Name:mk83b97343a3fb113dcf856a1aa1b1fe91e3e434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:55:00.217549   61469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/client.key ...
	I0429 19:55:00.217559   61469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/client.key: {Name:mk1c604ad888a09c67c6fe72e1b6ae432d8629ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:55:00.217637   61469 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/apiserver.key.2793b1c2
	I0429 19:55:00.217653   61469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/apiserver.crt.2793b1c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.227]
	I0429 19:55:00.505851   61469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/apiserver.crt.2793b1c2 ...
	I0429 19:55:00.505866   61469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/apiserver.crt.2793b1c2: {Name:mk29f8f332a597231715858267ee72df2c740e26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:55:00.549758   61469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/apiserver.key.2793b1c2 ...
	I0429 19:55:00.549790   61469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/apiserver.key.2793b1c2: {Name:mkf1e4eefc7c61e28654cb466d9239e09f4ccd12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:55:00.549944   61469 certs.go:381] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/apiserver.crt.2793b1c2 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/apiserver.crt
	I0429 19:55:00.550044   61469 certs.go:385] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/apiserver.key.2793b1c2 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/apiserver.key
	I0429 19:55:00.550156   61469 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/proxy-client.key
	I0429 19:55:00.550174   61469 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/proxy-client.crt with IP's: []
	I0429 19:55:00.715719   61469 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/proxy-client.crt ...
	I0429 19:55:00.715734   61469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/proxy-client.crt: {Name:mk879e5ee0022249e9d773536d0a46445e2286a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:55:00.732313   61469 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/proxy-client.key ...
	I0429 19:55:00.732351   61469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/proxy-client.key: {Name:mkbf375d583dcf33fd772dfd1c265eb911bcdd37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:55:00.732652   61469 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 19:55:00.732697   61469 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 19:55:00.732706   61469 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 19:55:00.732737   61469 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 19:55:00.732761   61469 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 19:55:00.732831   61469 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 19:55:00.732880   61469 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:55:00.733653   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:55:00.778729   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 19:55:00.816938   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:55:00.847990   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:55:00.875779   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 19:55:00.904869   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 19:55:00.933135   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:55:00.964271   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/cert-expiration-509508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 19:55:00.996131   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 19:55:01.026886   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:55:01.056699   61469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 19:55:01.084865   61469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 19:55:01.105740   61469 ssh_runner.go:195] Run: openssl version
	I0429 19:55:01.113461   61469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 19:55:01.132423   61469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 19:55:01.139418   61469 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:55:01.139475   61469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 19:55:01.146687   61469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 19:55:01.162061   61469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:55:01.177821   61469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:55:01.183710   61469 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:55:01.183770   61469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:55:01.190843   61469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:55:01.205917   61469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 19:55:01.218932   61469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 19:55:01.224596   61469 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:55:01.224663   61469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 19:55:01.231738   61469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
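The ls/openssl/ln sequence above installs each CA certificate into the OpenSSL trust directory: "openssl x509 -hash -noout -in <cert>" prints the certificate's subject-name hash, and the file is then exposed as /etc/ssl/certs/<hash>.0, which is the lookup name OpenSSL expects for trusted CAs. A minimal sketch of the same convention, reusing a cert path that appears in this log:

    # Hypothetical illustration of the hash-symlink convention used above
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"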
	I0429 19:55:01.249979   61469 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:55:01.256722   61469 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 19:55:01.256801   61469 kubeadm.go:391] StartCluster: {Name:cert-expiration-509508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.0 ClusterName:cert-expiration-509508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.227 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:55:01.256883   61469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 19:55:01.256933   61469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 19:55:01.304737   61469 cri.go:89] found id: ""
	I0429 19:55:01.304810   61469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 19:55:01.319652   61469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 19:55:01.331738   61469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 19:55:01.345065   61469 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 19:55:01.345076   61469 kubeadm.go:156] found existing configuration files:
	
	I0429 19:55:01.345130   61469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 19:55:01.356319   61469 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 19:55:01.356405   61469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 19:55:01.369135   61469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 19:55:01.380738   61469 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 19:55:01.380800   61469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 19:55:01.392692   61469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 19:55:01.404323   61469 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 19:55:01.404377   61469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 19:55:01.416299   61469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 19:55:01.429615   61469 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 19:55:01.429685   61469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 19:55:01.441204   61469 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 19:55:01.587252   61469 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 19:55:01.587387   61469 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 19:55:01.749580   61469 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 19:55:01.749741   61469 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 19:55:01.749901   61469 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 19:55:01.980179   61469 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Apr 29 19:55:03 pause-467472 crio[2946]: time="2024-04-29 19:55:03.959637871Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420503959612209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ea0d645-b24c-4df5-8723-efca6d0e425e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:03 pause-467472 crio[2946]: time="2024-04-29 19:55:03.960774447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=373825e0-7327-4dc2-a85e-30a7ecf3082e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:03 pause-467472 crio[2946]: time="2024-04-29 19:55:03.960885576Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=373825e0-7327-4dc2-a85e-30a7ecf3082e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:03 pause-467472 crio[2946]: time="2024-04-29 19:55:03.961794508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:856a6562f72ccdce03ab5ddb42f18ec74b67d1b65d443d072fbf0f667d53bf75,PodSandboxId:9b118bcbdc20471ab822568594b4ab11daede88d374a96f5258660b8c1610f4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420479841655005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b04ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b041c92e4d095bc7e42cfaaa43da63fb5b59ec8a3ee3a6f384a612eebc5c08,PodSandboxId:2d2d64ea3347856ac8c54fab25e44946bd7f17c312367ca78e85808ea287b825,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714420479825619141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df4a7540f4dacf7548da27f73e85aecb1def304ce306c6ac46e6d3e883bebe8,PodSandboxId:cf63ca870504b7b727de89fa47b3a10e1ab43abe60b6f4bb243c045e5bf4c356,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714420475073492348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annot
ations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34387ffcf4be5242d03080b49b44ff2a9c95713764715d7c363b069cb7724f4a,PodSandboxId:e2135e6c888b708e90978af4b011949e65555a6bfb57b99f967277e9581e91ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714420475044030794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]
string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53997e74a83197f734d0b47f1285cebcec21e80d3d391876c898a3a9d2a3962,PodSandboxId:d1339d9ebd7d4682341a71c7d374fadf70b370b5f83b47d7047b86c820c75ff6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714420475061514516,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49ff880c87d08fbe442888ce45f7e407052b1ba54151444a06eddad58681ce4,PodSandboxId:b371b66423e5b34535b19361eeac285636f92eb985049fdbf0832a861bc623c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714420475030824690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io
.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a394dbf9934e3dc9ed65529f9a97129f035af300089973245a90d2e2e8474,PodSandboxId:ad8368c23007ce3c34e748af9272d03626a3988444f702d2f84aee415a10dbc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420469331756819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b0
4ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb26043d1744b0e27644bdd5a8f34835683bedc9dcc08a1e1c1c2b07cda89127,PodSandboxId:46a3e25f595fad1c3483e560aa411eddbd36327e65a574dc16922083a0732d95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714420468280348279,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80ed718bb499c55acb1feb339adcd1401d1da0ca245633dae77fd5c49ec6ef03,PodSandboxId:b294341f60acf85f6f0bcd1eb836c817ba1dcec14914d2b2f33cf784b3802be9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714420468506632508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annotations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fa5f157e8331a96ddcfb01245b8bcd3e83b3e0c1a86f692339d9b6caba3858f,PodSandboxId:f91ce8b98a516b2d87a1402c268d51d766bd07895ae869e83d01f30d36fb4ae7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714420468187572152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a877329ca5eb47d2eadf1c18f3f2091dea760b6e9d962d14e7c882f854bb878,PodSandboxId:5cdb7927839d9409dccc38594f86fdf5495b261ae455bbd1a176fca1fbcf25cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714420467996458839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13560af33a9dce7ccf8d5edc13a5ac3b8192c21a14a1d74c86e409357f505e98,PodSandboxId:4c0e67e1628662fb8ab7ca25f0a75703c302676fc1e8779a1112914a4a2ee73a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714420467876055127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=373825e0-7327-4dc2-a85e-30a7ecf3082e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.020660691Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8a9e2e3-a6d6-4589-8a70-95c2c116932f name=/runtime.v1.RuntimeService/Version
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.020788967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8a9e2e3-a6d6-4589-8a70-95c2c116932f name=/runtime.v1.RuntimeService/Version
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.025683262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5504cba-161e-45c9-9fc5-55768e799d3f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.026308421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420504026271160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5504cba-161e-45c9-9fc5-55768e799d3f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.031434944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f12b82d-0b2b-4d1e-8a25-27e960437853 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.031570848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f12b82d-0b2b-4d1e-8a25-27e960437853 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.032656115Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:856a6562f72ccdce03ab5ddb42f18ec74b67d1b65d443d072fbf0f667d53bf75,PodSandboxId:9b118bcbdc20471ab822568594b4ab11daede88d374a96f5258660b8c1610f4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420479841655005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b04ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b041c92e4d095bc7e42cfaaa43da63fb5b59ec8a3ee3a6f384a612eebc5c08,PodSandboxId:2d2d64ea3347856ac8c54fab25e44946bd7f17c312367ca78e85808ea287b825,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714420479825619141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df4a7540f4dacf7548da27f73e85aecb1def304ce306c6ac46e6d3e883bebe8,PodSandboxId:cf63ca870504b7b727de89fa47b3a10e1ab43abe60b6f4bb243c045e5bf4c356,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714420475073492348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annot
ations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34387ffcf4be5242d03080b49b44ff2a9c95713764715d7c363b069cb7724f4a,PodSandboxId:e2135e6c888b708e90978af4b011949e65555a6bfb57b99f967277e9581e91ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714420475044030794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]
string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53997e74a83197f734d0b47f1285cebcec21e80d3d391876c898a3a9d2a3962,PodSandboxId:d1339d9ebd7d4682341a71c7d374fadf70b370b5f83b47d7047b86c820c75ff6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714420475061514516,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49ff880c87d08fbe442888ce45f7e407052b1ba54151444a06eddad58681ce4,PodSandboxId:b371b66423e5b34535b19361eeac285636f92eb985049fdbf0832a861bc623c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714420475030824690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io
.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a394dbf9934e3dc9ed65529f9a97129f035af300089973245a90d2e2e8474,PodSandboxId:ad8368c23007ce3c34e748af9272d03626a3988444f702d2f84aee415a10dbc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420469331756819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b0
4ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb26043d1744b0e27644bdd5a8f34835683bedc9dcc08a1e1c1c2b07cda89127,PodSandboxId:46a3e25f595fad1c3483e560aa411eddbd36327e65a574dc16922083a0732d95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714420468280348279,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80ed718bb499c55acb1feb339adcd1401d1da0ca245633dae77fd5c49ec6ef03,PodSandboxId:b294341f60acf85f6f0bcd1eb836c817ba1dcec14914d2b2f33cf784b3802be9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714420468506632508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annotations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fa5f157e8331a96ddcfb01245b8bcd3e83b3e0c1a86f692339d9b6caba3858f,PodSandboxId:f91ce8b98a516b2d87a1402c268d51d766bd07895ae869e83d01f30d36fb4ae7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714420468187572152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a877329ca5eb47d2eadf1c18f3f2091dea760b6e9d962d14e7c882f854bb878,PodSandboxId:5cdb7927839d9409dccc38594f86fdf5495b261ae455bbd1a176fca1fbcf25cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714420467996458839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13560af33a9dce7ccf8d5edc13a5ac3b8192c21a14a1d74c86e409357f505e98,PodSandboxId:4c0e67e1628662fb8ab7ca25f0a75703c302676fc1e8779a1112914a4a2ee73a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714420467876055127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f12b82d-0b2b-4d1e-8a25-27e960437853 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.086800832Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=65ef2584-3b94-460e-bb93-53c6cc6db2e5 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.087034742Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=65ef2584-3b94-460e-bb93-53c6cc6db2e5 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.088113368Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=033efea9-e736-4222-b86f-4d0a7c656dbe name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.088478695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420504088456415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=033efea9-e736-4222-b86f-4d0a7c656dbe name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.089636250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fdd47fc-006e-44f8-a542-06947b79ac1b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.089713531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fdd47fc-006e-44f8-a542-06947b79ac1b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.090205697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:856a6562f72ccdce03ab5ddb42f18ec74b67d1b65d443d072fbf0f667d53bf75,PodSandboxId:9b118bcbdc20471ab822568594b4ab11daede88d374a96f5258660b8c1610f4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420479841655005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b04ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b041c92e4d095bc7e42cfaaa43da63fb5b59ec8a3ee3a6f384a612eebc5c08,PodSandboxId:2d2d64ea3347856ac8c54fab25e44946bd7f17c312367ca78e85808ea287b825,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714420479825619141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df4a7540f4dacf7548da27f73e85aecb1def304ce306c6ac46e6d3e883bebe8,PodSandboxId:cf63ca870504b7b727de89fa47b3a10e1ab43abe60b6f4bb243c045e5bf4c356,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714420475073492348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annot
ations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34387ffcf4be5242d03080b49b44ff2a9c95713764715d7c363b069cb7724f4a,PodSandboxId:e2135e6c888b708e90978af4b011949e65555a6bfb57b99f967277e9581e91ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714420475044030794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]
string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53997e74a83197f734d0b47f1285cebcec21e80d3d391876c898a3a9d2a3962,PodSandboxId:d1339d9ebd7d4682341a71c7d374fadf70b370b5f83b47d7047b86c820c75ff6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714420475061514516,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49ff880c87d08fbe442888ce45f7e407052b1ba54151444a06eddad58681ce4,PodSandboxId:b371b66423e5b34535b19361eeac285636f92eb985049fdbf0832a861bc623c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714420475030824690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io
.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a394dbf9934e3dc9ed65529f9a97129f035af300089973245a90d2e2e8474,PodSandboxId:ad8368c23007ce3c34e748af9272d03626a3988444f702d2f84aee415a10dbc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420469331756819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b0
4ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb26043d1744b0e27644bdd5a8f34835683bedc9dcc08a1e1c1c2b07cda89127,PodSandboxId:46a3e25f595fad1c3483e560aa411eddbd36327e65a574dc16922083a0732d95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714420468280348279,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80ed718bb499c55acb1feb339adcd1401d1da0ca245633dae77fd5c49ec6ef03,PodSandboxId:b294341f60acf85f6f0bcd1eb836c817ba1dcec14914d2b2f33cf784b3802be9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714420468506632508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annotations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fa5f157e8331a96ddcfb01245b8bcd3e83b3e0c1a86f692339d9b6caba3858f,PodSandboxId:f91ce8b98a516b2d87a1402c268d51d766bd07895ae869e83d01f30d36fb4ae7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714420468187572152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a877329ca5eb47d2eadf1c18f3f2091dea760b6e9d962d14e7c882f854bb878,PodSandboxId:5cdb7927839d9409dccc38594f86fdf5495b261ae455bbd1a176fca1fbcf25cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714420467996458839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13560af33a9dce7ccf8d5edc13a5ac3b8192c21a14a1d74c86e409357f505e98,PodSandboxId:4c0e67e1628662fb8ab7ca25f0a75703c302676fc1e8779a1112914a4a2ee73a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714420467876055127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fdd47fc-006e-44f8-a542-06947b79ac1b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.142501377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c316d52-31ca-4a59-bda1-746d4e7cd988 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.142572126Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c316d52-31ca-4a59-bda1-746d4e7cd988 name=/runtime.v1.RuntimeService/Version
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.144201215Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea5dc821-593a-4caf-88b4-b8bc8dc82d8d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.144608425Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714420504144581830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea5dc821-593a-4caf-88b4-b8bc8dc82d8d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.145602363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d46a187-cf88-430d-a5a0-7dfb1df50e7f name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.145655217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d46a187-cf88-430d-a5a0-7dfb1df50e7f name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 19:55:04 pause-467472 crio[2946]: time="2024-04-29 19:55:04.146007123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:856a6562f72ccdce03ab5ddb42f18ec74b67d1b65d443d072fbf0f667d53bf75,PodSandboxId:9b118bcbdc20471ab822568594b4ab11daede88d374a96f5258660b8c1610f4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714420479841655005,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b04ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08b041c92e4d095bc7e42cfaaa43da63fb5b59ec8a3ee3a6f384a612eebc5c08,PodSandboxId:2d2d64ea3347856ac8c54fab25e44946bd7f17c312367ca78e85808ea287b825,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714420479825619141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3df4a7540f4dacf7548da27f73e85aecb1def304ce306c6ac46e6d3e883bebe8,PodSandboxId:cf63ca870504b7b727de89fa47b3a10e1ab43abe60b6f4bb243c045e5bf4c356,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714420475073492348,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annot
ations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34387ffcf4be5242d03080b49b44ff2a9c95713764715d7c363b069cb7724f4a,PodSandboxId:e2135e6c888b708e90978af4b011949e65555a6bfb57b99f967277e9581e91ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714420475044030794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]
string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53997e74a83197f734d0b47f1285cebcec21e80d3d391876c898a3a9d2a3962,PodSandboxId:d1339d9ebd7d4682341a71c7d374fadf70b370b5f83b47d7047b86c820c75ff6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714420475061514516,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49ff880c87d08fbe442888ce45f7e407052b1ba54151444a06eddad58681ce4,PodSandboxId:b371b66423e5b34535b19361eeac285636f92eb985049fdbf0832a861bc623c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714420475030824690,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io
.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a43a394dbf9934e3dc9ed65529f9a97129f035af300089973245a90d2e2e8474,PodSandboxId:ad8368c23007ce3c34e748af9272d03626a3988444f702d2f84aee415a10dbc1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1714420469331756819,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-lxtq2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9d4855-6b30-41d8-b97d-2e8bab9e7135,},Annotations:map[string]string{io.kubernetes.container.hash: b2b0
4ff6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb26043d1744b0e27644bdd5a8f34835683bedc9dcc08a1e1c1c2b07cda89127,PodSandboxId:46a3e25f595fad1c3483e560aa411eddbd36327e65a574dc16922083a0732d95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1714420468280348279,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f992a9fcf53e2872d40008ece0172fbd,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80ed718bb499c55acb1feb339adcd1401d1da0ca245633dae77fd5c49ec6ef03,PodSandboxId:b294341f60acf85f6f0bcd1eb836c817ba1dcec14914d2b2f33cf784b3802be9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1714420468506632508,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.na
me: etcd-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d26e3329a0ea81dbd74d160c1394b07,},Annotations:map[string]string{io.kubernetes.container.hash: f603962c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fa5f157e8331a96ddcfb01245b8bcd3e83b3e0c1a86f692339d9b6caba3858f,PodSandboxId:f91ce8b98a516b2d87a1402c268d51d766bd07895ae869e83d01f30d36fb4ae7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1714420468187572152,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2brrw,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: dc85d0aa-db2c-4c9a-a318-19fd8634c217,},Annotations:map[string]string{io.kubernetes.container.hash: a78ea40a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a877329ca5eb47d2eadf1c18f3f2091dea760b6e9d962d14e7c882f854bb878,PodSandboxId:5cdb7927839d9409dccc38594f86fdf5495b261ae455bbd1a176fca1fbcf25cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1714420467996458839,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-467472,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 8dcbdc99f9290e9b69c37f3b43e3b6fe,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13560af33a9dce7ccf8d5edc13a5ac3b8192c21a14a1d74c86e409357f505e98,PodSandboxId:4c0e67e1628662fb8ab7ca25f0a75703c302676fc1e8779a1112914a4a2ee73a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1714420467876055127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-467472,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 45e9a8f319326875f7ad6b42a7279f00,},Annotations:map[string]string{io.kubernetes.container.hash: 3b9530dd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d46a187-cf88-430d-a5a0-7dfb1df50e7f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	856a6562f72cc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Running             coredns                   2                   9b118bcbdc204       coredns-7db6d8ff4d-lxtq2
	08b041c92e4d0       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   24 seconds ago      Running             kube-proxy                2                   2d2d64ea33478       kube-proxy-2brrw
	3df4a7540f4da       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   29 seconds ago      Running             etcd                      2                   cf63ca870504b       etcd-pause-467472
	e53997e74a831       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   29 seconds ago      Running             kube-controller-manager   2                   d1339d9ebd7d4       kube-controller-manager-pause-467472
	34387ffcf4be5       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   29 seconds ago      Running             kube-apiserver            2                   e2135e6c888b7       kube-apiserver-pause-467472
	f49ff880c87d0       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   29 seconds ago      Running             kube-scheduler            2                   b371b66423e5b       kube-scheduler-pause-467472
	a43a394dbf993       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   34 seconds ago      Exited              coredns                   1                   ad8368c23007c       coredns-7db6d8ff4d-lxtq2
	80ed718bb499c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   35 seconds ago      Exited              etcd                      1                   b294341f60acf       etcd-pause-467472
	cb26043d1744b       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   35 seconds ago      Exited              kube-controller-manager   1                   46a3e25f595fa       kube-controller-manager-pause-467472
	5fa5f157e8331       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   36 seconds ago      Exited              kube-proxy                1                   f91ce8b98a516       kube-proxy-2brrw
	0a877329ca5eb       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   36 seconds ago      Exited              kube-scheduler            1                   5cdb7927839d9       kube-scheduler-pause-467472
	13560af33a9dc       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   36 seconds ago      Exited              kube-apiserver            1                   4c0e67e162866       kube-apiserver-pause-467472
	
	
	==> coredns [856a6562f72ccdce03ab5ddb42f18ec74b67d1b65d443d072fbf0f667d53bf75] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41287 - 6138 "HINFO IN 8308597871302896540.5934827365394592310. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016164531s
	
	
	==> coredns [a43a394dbf9934e3dc9ed65529f9a97129f035af300089973245a90d2e2e8474] <==
	
	
	==> describe nodes <==
	Name:               pause-467472
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-467472
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=pause-467472
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T19_53_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:53:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-467472
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:54:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:54:39 +0000   Mon, 29 Apr 2024 19:53:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:54:39 +0000   Mon, 29 Apr 2024 19:53:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:54:39 +0000   Mon, 29 Apr 2024 19:53:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:54:39 +0000   Mon, 29 Apr 2024 19:53:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.54
	  Hostname:    pause-467472
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 af5f42a8f5094c06bdc81621083e473c
	  System UUID:                af5f42a8-f509-4c06-bdc8-1621083e473c
	  Boot ID:                    9b81afe2-f057-478b-8949-1c6f4d94b8ba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-lxtq2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     66s
	  kube-system                 etcd-pause-467472                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         79s
	  kube-system                 kube-apiserver-pause-467472             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-467472    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-2brrw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-scheduler-pause-467472             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 64s                kube-proxy       
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  86s (x8 over 86s)  kubelet          Node pause-467472 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s (x8 over 86s)  kubelet          Node pause-467472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s (x7 over 86s)  kubelet          Node pause-467472 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    79s                kubelet          Node pause-467472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  79s                kubelet          Node pause-467472 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     79s                kubelet          Node pause-467472 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                78s                kubelet          Node pause-467472 status is now: NodeReady
	  Normal  RegisteredNode           67s                node-controller  Node pause-467472 event: Registered Node pause-467472 in Controller
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s (x8 over 30s)  kubelet          Node pause-467472 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x8 over 30s)  kubelet          Node pause-467472 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x7 over 30s)  kubelet          Node pause-467472 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13s                node-controller  Node pause-467472 event: Registered Node pause-467472 in Controller
	
	
	==> dmesg <==
	[  +0.062656] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067621] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.226113] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.140985] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.368236] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +5.097682] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.062955] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.176716] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +1.043992] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.024170] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.092580] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.106205] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.889035] systemd-fstab-generator[1527]: Ignoring "noauto" option for root device
	[Apr29 19:54] kauditd_printk_skb: 98 callbacks suppressed
	[  +0.352053] systemd-fstab-generator[2470]: Ignoring "noauto" option for root device
	[  +0.470546] systemd-fstab-generator[2643]: Ignoring "noauto" option for root device
	[  +0.611996] systemd-fstab-generator[2787]: Ignoring "noauto" option for root device
	[  +0.235917] systemd-fstab-generator[2814]: Ignoring "noauto" option for root device
	[  +0.600005] systemd-fstab-generator[2911]: Ignoring "noauto" option for root device
	[  +1.996961] systemd-fstab-generator[3512]: Ignoring "noauto" option for root device
	[  +2.635077] systemd-fstab-generator[3636]: Ignoring "noauto" option for root device
	[  +0.087594] kauditd_printk_skb: 244 callbacks suppressed
	[  +5.578161] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.927849] kauditd_printk_skb: 2 callbacks suppressed
	[  +2.750540] systemd-fstab-generator[4074]: Ignoring "noauto" option for root device
	
	
	==> etcd [3df4a7540f4dacf7548da27f73e85aecb1def304ce306c6ac46e6d3e883bebe8] <==
	{"level":"info","ts":"2024-04-29T19:54:35.714605Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","added-peer-id":"b0a6bbe4c9ddfbc1","added-peer-peer-urls":["https://192.168.50.54:2380"]}
	{"level":"info","ts":"2024-04-29T19:54:35.714833Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:54:35.714878Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:54:35.719361Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T19:54:35.719687Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b0a6bbe4c9ddfbc1","initial-advertise-peer-urls":["https://192.168.50.54:2380"],"listen-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T19:54:35.719722Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T19:54:35.719834Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-04-29T19:54:35.719844Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-04-29T19:54:37.507741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T19:54:37.507842Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T19:54:37.508032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgPreVoteResp from b0a6bbe4c9ddfbc1 at term 2"}
	{"level":"info","ts":"2024-04-29T19:54:37.508076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T19:54:37.508101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgVoteResp from b0a6bbe4c9ddfbc1 at term 3"}
	{"level":"info","ts":"2024-04-29T19:54:37.508136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became leader at term 3"}
	{"level":"info","ts":"2024-04-29T19:54:37.508163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b0a6bbe4c9ddfbc1 elected leader b0a6bbe4c9ddfbc1 at term 3"}
	{"level":"info","ts":"2024-04-29T19:54:37.514773Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b0a6bbe4c9ddfbc1","local-member-attributes":"{Name:pause-467472 ClientURLs:[https://192.168.50.54:2379]}","request-path":"/0/members/b0a6bbe4c9ddfbc1/attributes","cluster-id":"b7dc4198fc8444d0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T19:54:37.51481Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:54:37.515329Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T19:54:37.515385Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T19:54:37.514842Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:54:37.517244Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T19:54:37.518204Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.54:2379"}
	{"level":"warn","ts":"2024-04-29T19:54:59.744239Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.12829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T19:54:59.744692Z","caller":"traceutil/trace.go:171","msg":"trace[1065081412] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:506; }","duration":"135.636915ms","start":"2024-04-29T19:54:59.609022Z","end":"2024-04-29T19:54:59.744659Z","steps":["trace[1065081412] 'range keys from in-memory index tree'  (duration: 135.034941ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T19:55:00.623861Z","caller":"traceutil/trace.go:171","msg":"trace[952986955] transaction","detail":"{read_only:false; response_revision:507; number_of_response:1; }","duration":"288.035252ms","start":"2024-04-29T19:55:00.335799Z","end":"2024-04-29T19:55:00.623835Z","steps":["trace[952986955] 'process raft request'  (duration: 243.806479ms)","trace[952986955] 'compare'  (duration: 43.916865ms)"],"step_count":2}
	
	
	==> etcd [80ed718bb499c55acb1feb339adcd1401d1da0ca245633dae77fd5c49ec6ef03] <==
	{"level":"info","ts":"2024-04-29T19:54:29.267592Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"103.777435ms"}
	{"level":"info","ts":"2024-04-29T19:54:29.315075Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-04-29T19:54:29.406314Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","commit-index":445}
	{"level":"info","ts":"2024-04-29T19:54:29.406544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 switched to configuration voters=()"}
	{"level":"info","ts":"2024-04-29T19:54:29.40659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became follower at term 2"}
	{"level":"info","ts":"2024-04-29T19:54:29.406614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b0a6bbe4c9ddfbc1 [peers: [], term: 2, commit: 445, applied: 0, lastindex: 445, lastterm: 2]"}
	{"level":"warn","ts":"2024-04-29T19:54:29.417287Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-04-29T19:54:29.469698Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":427}
	{"level":"info","ts":"2024-04-29T19:54:29.476518Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-04-29T19:54:29.49124Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b0a6bbe4c9ddfbc1","timeout":"7s"}
	{"level":"info","ts":"2024-04-29T19:54:29.495035Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b0a6bbe4c9ddfbc1"}
	{"level":"info","ts":"2024-04-29T19:54:29.495839Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"b0a6bbe4c9ddfbc1","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-29T19:54:29.510497Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-29T19:54:29.510826Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:54:29.51104Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:54:29.511087Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-29T19:54:29.511477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 switched to configuration voters=(12729067988122991553)"}
	{"level":"info","ts":"2024-04-29T19:54:29.512496Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","added-peer-id":"b0a6bbe4c9ddfbc1","added-peer-peer-urls":["https://192.168.50.54:2380"]}
	{"level":"info","ts":"2024-04-29T19:54:29.537656Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:54:29.537814Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:54:29.581106Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-04-29T19:54:29.581148Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-04-29T19:54:29.58123Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T19:54:29.58159Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b0a6bbe4c9ddfbc1","initial-advertise-peer-urls":["https://192.168.50.54:2380"],"listen-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T19:54:29.581621Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 19:55:04 up 1 min,  0 users,  load average: 1.23, 0.53, 0.19
	Linux pause-467472 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [13560af33a9dce7ccf8d5edc13a5ac3b8192c21a14a1d74c86e409357f505e98] <==
	I0429 19:54:28.576636       1 options.go:221] external host was not specified, using 192.168.50.54
	I0429 19:54:28.580875       1 server.go:148] Version: v1.30.0
	I0429 19:54:28.581027       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [34387ffcf4be5242d03080b49b44ff2a9c95713764715d7c363b069cb7724f4a] <==
	I0429 19:54:38.904200       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0429 19:54:38.905465       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 19:54:38.905548       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 19:54:38.905556       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 19:54:38.906286       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0429 19:54:38.906716       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0429 19:54:38.917811       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0429 19:54:38.933552       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0429 19:54:38.933626       1 policy_source.go:224] refreshing policies
	I0429 19:54:38.936751       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 19:54:38.948881       1 aggregator.go:165] initial CRD sync complete...
	I0429 19:54:38.949087       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 19:54:38.949121       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 19:54:38.949146       1 cache.go:39] Caches are synced for autoregister controller
	I0429 19:54:38.949561       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 19:54:38.949888       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0429 19:54:38.987320       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0429 19:54:39.810716       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 19:54:40.799346       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 19:54:40.811285       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 19:54:40.866236       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 19:54:40.906866       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 19:54:40.931366       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 19:54:51.666546       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 19:54:51.820332       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [cb26043d1744b0e27644bdd5a8f34835683bedc9dcc08a1e1c1c2b07cda89127] <==
	
	
	==> kube-controller-manager [e53997e74a83197f734d0b47f1285cebcec21e80d3d391876c898a3a9d2a3962] <==
	I0429 19:54:51.693391       1 shared_informer.go:320] Caches are synced for stateful set
	I0429 19:54:51.704343       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0429 19:54:51.727206       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"pause-467472\" does not exist"
	I0429 19:54:51.750331       1 shared_informer.go:320] Caches are synced for TTL
	I0429 19:54:51.762791       1 shared_informer.go:320] Caches are synced for GC
	I0429 19:54:51.768193       1 shared_informer.go:320] Caches are synced for node
	I0429 19:54:51.768259       1 shared_informer.go:320] Caches are synced for disruption
	I0429 19:54:51.768299       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0429 19:54:51.768320       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0429 19:54:51.768350       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0429 19:54:51.768356       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0429 19:54:51.774738       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 19:54:51.788879       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 19:54:51.799146       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0429 19:54:51.799761       1 shared_informer.go:320] Caches are synced for daemon sets
	I0429 19:54:51.803382       1 shared_informer.go:320] Caches are synced for taint
	I0429 19:54:51.803798       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0429 19:54:51.804097       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-467472"
	I0429 19:54:51.804268       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0429 19:54:51.812945       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 19:54:51.822715       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0429 19:54:51.825500       1 shared_informer.go:320] Caches are synced for resource quota
	I0429 19:54:52.243380       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 19:54:52.243504       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 19:54:52.282047       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [08b041c92e4d095bc7e42cfaaa43da63fb5b59ec8a3ee3a6f384a612eebc5c08] <==
	I0429 19:54:40.058216       1 server_linux.go:69] "Using iptables proxy"
	I0429 19:54:40.086441       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.54"]
	I0429 19:54:40.172855       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:54:40.173024       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:54:40.173056       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:54:40.176742       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:54:40.176999       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:54:40.177044       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:54:40.178354       1 config.go:192] "Starting service config controller"
	I0429 19:54:40.178402       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:54:40.178428       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:54:40.178431       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:54:40.178864       1 config.go:319] "Starting node config controller"
	I0429 19:54:40.179003       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:54:40.278818       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:54:40.278993       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:54:40.279062       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [5fa5f157e8331a96ddcfb01245b8bcd3e83b3e0c1a86f692339d9b6caba3858f] <==
	
	
	==> kube-scheduler [0a877329ca5eb47d2eadf1c18f3f2091dea760b6e9d962d14e7c882f854bb878] <==
	
	
	==> kube-scheduler [f49ff880c87d08fbe442888ce45f7e407052b1ba54151444a06eddad58681ce4] <==
	I0429 19:54:36.422358       1 serving.go:380] Generated self-signed cert in-memory
	W0429 19:54:38.845512       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0429 19:54:38.845568       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 19:54:38.845579       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0429 19:54:38.845589       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0429 19:54:38.934856       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 19:54:38.935012       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:54:38.953513       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 19:54:38.953647       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 19:54:38.953085       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 19:54:38.958136       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 19:54:39.059221       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 19:54:35 pause-467472 kubelet[3643]: I0429 19:54:35.011565    3643 scope.go:117] "RemoveContainer" containerID="cb26043d1744b0e27644bdd5a8f34835683bedc9dcc08a1e1c1c2b07cda89127"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: I0429 19:54:35.011991    3643 scope.go:117] "RemoveContainer" containerID="0a877329ca5eb47d2eadf1c18f3f2091dea760b6e9d962d14e7c882f854bb878"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: I0429 19:54:35.012200    3643 scope.go:117] "RemoveContainer" containerID="80ed718bb499c55acb1feb339adcd1401d1da0ca245633dae77fd5c49ec6ef03"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: I0429 19:54:35.015479    3643 scope.go:117] "RemoveContainer" containerID="13560af33a9dce7ccf8d5edc13a5ac3b8192c21a14a1d74c86e409357f505e98"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: E0429 19:54:35.115779    3643 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-467472?timeout=10s\": dial tcp 192.168.50.54:8443: connect: connection refused" interval="800ms"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: I0429 19:54:35.225583    3643 kubelet_node_status.go:73] "Attempting to register node" node="pause-467472"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: E0429 19:54:35.226409    3643 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.54:8443: connect: connection refused" node="pause-467472"
	Apr 29 19:54:35 pause-467472 kubelet[3643]: W0429 19:54:35.298454    3643 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-467472&limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 29 19:54:35 pause-467472 kubelet[3643]: E0429 19:54:35.298549    3643 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-467472&limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 29 19:54:35 pause-467472 kubelet[3643]: W0429 19:54:35.475863    3643 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 29 19:54:35 pause-467472 kubelet[3643]: E0429 19:54:35.476020    3643 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Apr 29 19:54:36 pause-467472 kubelet[3643]: I0429 19:54:36.028581    3643 kubelet_node_status.go:73] "Attempting to register node" node="pause-467472"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.018649    3643 kubelet_node_status.go:112] "Node was previously registered" node="pause-467472"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.019139    3643 kubelet_node_status.go:76] "Successfully registered node" node="pause-467472"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.020850    3643 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.022075    3643 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.488606    3643 apiserver.go:52] "Watching apiserver"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.492186    3643 topology_manager.go:215] "Topology Admit Handler" podUID="db9d4855-6b30-41d8-b97d-2e8bab9e7135" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lxtq2"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.492393    3643 topology_manager.go:215] "Topology Admit Handler" podUID="dc85d0aa-db2c-4c9a-a318-19fd8634c217" podNamespace="kube-system" podName="kube-proxy-2brrw"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.503773    3643 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.541836    3643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc85d0aa-db2c-4c9a-a318-19fd8634c217-lib-modules\") pod \"kube-proxy-2brrw\" (UID: \"dc85d0aa-db2c-4c9a-a318-19fd8634c217\") " pod="kube-system/kube-proxy-2brrw"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.542043    3643 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc85d0aa-db2c-4c9a-a318-19fd8634c217-xtables-lock\") pod \"kube-proxy-2brrw\" (UID: \"dc85d0aa-db2c-4c9a-a318-19fd8634c217\") " pod="kube-system/kube-proxy-2brrw"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.793161    3643 scope.go:117] "RemoveContainer" containerID="5fa5f157e8331a96ddcfb01245b8bcd3e83b3e0c1a86f692339d9b6caba3858f"
	Apr 29 19:54:39 pause-467472 kubelet[3643]: I0429 19:54:39.793441    3643 scope.go:117] "RemoveContainer" containerID="a43a394dbf9934e3dc9ed65529f9a97129f035af300089973245a90d2e2e8474"
	Apr 29 19:54:48 pause-467472 kubelet[3643]: I0429 19:54:48.139263    3643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-467472 -n pause-467472
helpers_test.go:261: (dbg) Run:  kubectl --context pause-467472 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (62.40s)

TestStartStop/group/old-k8s-version/serial/FirstStart (294.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-919612 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-919612 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m54.016111721s)

-- stdout --
	* [old-k8s-version-919612] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-919612" primary control-plane node in "old-k8s-version-919612" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0429 19:55:41.380580   62888 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:55:41.380706   62888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:55:41.380717   62888 out.go:304] Setting ErrFile to fd 2...
	I0429 19:55:41.380724   62888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:55:41.381040   62888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:55:41.381708   62888 out.go:298] Setting JSON to false
	I0429 19:55:41.382690   62888 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5839,"bootTime":1714414702,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 19:55:41.382749   62888 start.go:139] virtualization: kvm guest
	I0429 19:55:41.385085   62888 out.go:177] * [old-k8s-version-919612] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 19:55:41.386943   62888 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:55:41.387020   62888 notify.go:220] Checking for updates...
	I0429 19:55:41.388480   62888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:55:41.390079   62888 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:55:41.391675   62888 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:55:41.393326   62888 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 19:55:41.396080   62888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:55:41.398229   62888 config.go:182] Loaded profile config "cert-expiration-509508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:55:41.398411   62888 config.go:182] Loaded profile config "cert-options-437743": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:55:41.398542   62888 config.go:182] Loaded profile config "kubernetes-upgrade-935578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:55:41.398712   62888 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:55:41.435875   62888 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 19:55:41.437142   62888 start.go:297] selected driver: kvm2
	I0429 19:55:41.437157   62888 start.go:901] validating driver "kvm2" against <nil>
	I0429 19:55:41.437168   62888 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:55:41.437836   62888 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:55:41.437908   62888 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 19:55:41.454338   62888 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 19:55:41.454392   62888 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 19:55:41.454635   62888 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 19:55:41.454708   62888 cni.go:84] Creating CNI manager for ""
	I0429 19:55:41.454725   62888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:55:41.454736   62888 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 19:55:41.454811   62888 start.go:340] cluster config:
	{Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:55:41.454944   62888 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 19:55:41.457073   62888 out.go:177] * Starting "old-k8s-version-919612" primary control-plane node in "old-k8s-version-919612" cluster
	I0429 19:55:41.458508   62888 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 19:55:41.458564   62888 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0429 19:55:41.458579   62888 cache.go:56] Caching tarball of preloaded images
	I0429 19:55:41.458673   62888 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 19:55:41.458686   62888 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0429 19:55:41.458804   62888 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/config.json ...
	I0429 19:55:41.458828   62888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/config.json: {Name:mkdb2cecd76ba01739d27fb17a68ae70ffb28975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:55:41.458981   62888 start.go:360] acquireMachinesLock for old-k8s-version-919612: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 19:55:55.503707   62888 start.go:364] duration metric: took 14.044686977s to acquireMachinesLock for "old-k8s-version-919612"
	I0429 19:55:55.503783   62888 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 19:55:55.503938   62888 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 19:55:55.506347   62888 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 19:55:55.506568   62888 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:55:55.506623   62888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:55:55.523507   62888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0429 19:55:55.524006   62888 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:55:55.524616   62888 main.go:141] libmachine: Using API Version  1
	I0429 19:55:55.524641   62888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:55:55.525792   62888 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:55:55.525979   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 19:55:55.526178   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 19:55:55.526367   62888 start.go:159] libmachine.API.Create for "old-k8s-version-919612" (driver="kvm2")
	I0429 19:55:55.526429   62888 client.go:168] LocalClient.Create starting
	I0429 19:55:55.526462   62888 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 19:55:55.526497   62888 main.go:141] libmachine: Decoding PEM data...
	I0429 19:55:55.526523   62888 main.go:141] libmachine: Parsing certificate...
	I0429 19:55:55.526591   62888 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 19:55:55.526618   62888 main.go:141] libmachine: Decoding PEM data...
	I0429 19:55:55.526634   62888 main.go:141] libmachine: Parsing certificate...
	I0429 19:55:55.526663   62888 main.go:141] libmachine: Running pre-create checks...
	I0429 19:55:55.526675   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .PreCreateCheck
	I0429 19:55:55.527999   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetConfigRaw
	I0429 19:55:55.528359   62888 main.go:141] libmachine: Creating machine...
	I0429 19:55:55.528372   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .Create
	I0429 19:55:55.528497   62888 main.go:141] libmachine: (old-k8s-version-919612) Creating KVM machine...
	I0429 19:55:55.529689   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found existing default KVM network
	I0429 19:55:55.530810   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:55.530647   63060 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:31:bc:7c} reservation:<nil>}
	I0429 19:55:55.531809   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:55.531731   63060 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e8:e8:e4} reservation:<nil>}
	I0429 19:55:55.532675   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:55.532607   63060 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:99:a1:58} reservation:<nil>}
	I0429 19:55:55.533708   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:55.533614   63060 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003091b0}
	I0429 19:55:55.533752   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | created network xml: 
	I0429 19:55:55.533786   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | <network>
	I0429 19:55:55.533798   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG |   <name>mk-old-k8s-version-919612</name>
	I0429 19:55:55.533807   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG |   <dns enable='no'/>
	I0429 19:55:55.533817   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG |   
	I0429 19:55:55.533828   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0429 19:55:55.533853   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG |     <dhcp>
	I0429 19:55:55.533864   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0429 19:55:55.533875   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG |     </dhcp>
	I0429 19:55:55.533883   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG |   </ip>
	I0429 19:55:55.533893   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG |   
	I0429 19:55:55.533901   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | </network>
	I0429 19:55:55.533914   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | 
	I0429 19:55:55.539648   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | trying to create private KVM network mk-old-k8s-version-919612 192.168.72.0/24...
	I0429 19:55:55.619257   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | private KVM network mk-old-k8s-version-919612 192.168.72.0/24 created
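
The XML block logged just above is the private libvirt network the kvm2 driver creates for this cluster before building the VM. For illustration only, defining and starting an equivalent network with the libvirt Go bindings could look roughly like the sketch below; it assumes the libvirt.org/go/libvirt package, takes the network name and subnet from the log, and is not the driver's actual code. From the shell, virsh net-define and virsh net-start would achieve the same thing.

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Connect to the system libvirt daemon, matching the qemu:///system URI in the log.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Network definition equivalent to the XML printed in the log above.
        xml := `<network>
      <name>mk-old-k8s-version-919612</name>
      <dns enable='no'/>
      <ip address='192.168.72.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.72.2' end='192.168.72.253'/>
        </dhcp>
      </ip>
    </network>`

        // Define the persistent network, then start it.
        net, err := conn.NetworkDefineXML(xml)
        if err != nil {
            log.Fatal(err)
        }
        defer net.Free()
        if err := net.Create(); err != nil {
            log.Fatal(err)
        }
    }
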
	I0429 19:55:55.619304   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:55.619246   63060 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:55:55.619329   62888 main.go:141] libmachine: (old-k8s-version-919612) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612 ...
	I0429 19:55:55.619340   62888 main.go:141] libmachine: (old-k8s-version-919612) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 19:55:55.619454   62888 main.go:141] libmachine: (old-k8s-version-919612) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 19:55:55.889336   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:55.889203   63060 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa...
	I0429 19:55:55.941195   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:55.941065   63060 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/old-k8s-version-919612.rawdisk...
	I0429 19:55:55.941231   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Writing magic tar header
	I0429 19:55:55.941270   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Writing SSH key tar header
	I0429 19:55:55.941318   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:55.941198   63060 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612 ...
	I0429 19:55:55.941356   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612
	I0429 19:55:55.941422   62888 main.go:141] libmachine: (old-k8s-version-919612) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612 (perms=drwx------)
	I0429 19:55:55.941450   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 19:55:55.941466   62888 main.go:141] libmachine: (old-k8s-version-919612) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 19:55:55.941490   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:55:55.941506   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 19:55:55.941522   62888 main.go:141] libmachine: (old-k8s-version-919612) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 19:55:55.941539   62888 main.go:141] libmachine: (old-k8s-version-919612) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 19:55:55.941551   62888 main.go:141] libmachine: (old-k8s-version-919612) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 19:55:55.941564   62888 main.go:141] libmachine: (old-k8s-version-919612) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 19:55:55.941575   62888 main.go:141] libmachine: (old-k8s-version-919612) Creating domain...
	I0429 19:55:55.941590   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 19:55:55.941602   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Checking permissions on dir: /home/jenkins
	I0429 19:55:55.941616   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Checking permissions on dir: /home
	I0429 19:55:55.941628   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Skipping /home - not owner
	I0429 19:55:55.942693   62888 main.go:141] libmachine: (old-k8s-version-919612) define libvirt domain using xml: 
	I0429 19:55:55.942715   62888 main.go:141] libmachine: (old-k8s-version-919612) <domain type='kvm'>
	I0429 19:55:55.942726   62888 main.go:141] libmachine: (old-k8s-version-919612)   <name>old-k8s-version-919612</name>
	I0429 19:55:55.942734   62888 main.go:141] libmachine: (old-k8s-version-919612)   <memory unit='MiB'>2200</memory>
	I0429 19:55:55.942744   62888 main.go:141] libmachine: (old-k8s-version-919612)   <vcpu>2</vcpu>
	I0429 19:55:55.942752   62888 main.go:141] libmachine: (old-k8s-version-919612)   <features>
	I0429 19:55:55.942764   62888 main.go:141] libmachine: (old-k8s-version-919612)     <acpi/>
	I0429 19:55:55.942780   62888 main.go:141] libmachine: (old-k8s-version-919612)     <apic/>
	I0429 19:55:55.942793   62888 main.go:141] libmachine: (old-k8s-version-919612)     <pae/>
	I0429 19:55:55.942804   62888 main.go:141] libmachine: (old-k8s-version-919612)     
	I0429 19:55:55.942817   62888 main.go:141] libmachine: (old-k8s-version-919612)   </features>
	I0429 19:55:55.942842   62888 main.go:141] libmachine: (old-k8s-version-919612)   <cpu mode='host-passthrough'>
	I0429 19:55:55.942871   62888 main.go:141] libmachine: (old-k8s-version-919612)   
	I0429 19:55:55.942890   62888 main.go:141] libmachine: (old-k8s-version-919612)   </cpu>
	I0429 19:55:55.942903   62888 main.go:141] libmachine: (old-k8s-version-919612)   <os>
	I0429 19:55:55.942912   62888 main.go:141] libmachine: (old-k8s-version-919612)     <type>hvm</type>
	I0429 19:55:55.942922   62888 main.go:141] libmachine: (old-k8s-version-919612)     <boot dev='cdrom'/>
	I0429 19:55:55.942932   62888 main.go:141] libmachine: (old-k8s-version-919612)     <boot dev='hd'/>
	I0429 19:55:55.942946   62888 main.go:141] libmachine: (old-k8s-version-919612)     <bootmenu enable='no'/>
	I0429 19:55:55.942955   62888 main.go:141] libmachine: (old-k8s-version-919612)   </os>
	I0429 19:55:55.942974   62888 main.go:141] libmachine: (old-k8s-version-919612)   <devices>
	I0429 19:55:55.942989   62888 main.go:141] libmachine: (old-k8s-version-919612)     <disk type='file' device='cdrom'>
	I0429 19:55:55.943033   62888 main.go:141] libmachine: (old-k8s-version-919612)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/boot2docker.iso'/>
	I0429 19:55:55.943070   62888 main.go:141] libmachine: (old-k8s-version-919612)       <target dev='hdc' bus='scsi'/>
	I0429 19:55:55.943085   62888 main.go:141] libmachine: (old-k8s-version-919612)       <readonly/>
	I0429 19:55:55.943095   62888 main.go:141] libmachine: (old-k8s-version-919612)     </disk>
	I0429 19:55:55.943113   62888 main.go:141] libmachine: (old-k8s-version-919612)     <disk type='file' device='disk'>
	I0429 19:55:55.943144   62888 main.go:141] libmachine: (old-k8s-version-919612)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 19:55:55.943165   62888 main.go:141] libmachine: (old-k8s-version-919612)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/old-k8s-version-919612.rawdisk'/>
	I0429 19:55:55.943177   62888 main.go:141] libmachine: (old-k8s-version-919612)       <target dev='hda' bus='virtio'/>
	I0429 19:55:55.943189   62888 main.go:141] libmachine: (old-k8s-version-919612)     </disk>
	I0429 19:55:55.943202   62888 main.go:141] libmachine: (old-k8s-version-919612)     <interface type='network'>
	I0429 19:55:55.943215   62888 main.go:141] libmachine: (old-k8s-version-919612)       <source network='mk-old-k8s-version-919612'/>
	I0429 19:55:55.943224   62888 main.go:141] libmachine: (old-k8s-version-919612)       <model type='virtio'/>
	I0429 19:55:55.943248   62888 main.go:141] libmachine: (old-k8s-version-919612)     </interface>
	I0429 19:55:55.943261   62888 main.go:141] libmachine: (old-k8s-version-919612)     <interface type='network'>
	I0429 19:55:55.943274   62888 main.go:141] libmachine: (old-k8s-version-919612)       <source network='default'/>
	I0429 19:55:55.943288   62888 main.go:141] libmachine: (old-k8s-version-919612)       <model type='virtio'/>
	I0429 19:55:55.943302   62888 main.go:141] libmachine: (old-k8s-version-919612)     </interface>
	I0429 19:55:55.943312   62888 main.go:141] libmachine: (old-k8s-version-919612)     <serial type='pty'>
	I0429 19:55:55.943325   62888 main.go:141] libmachine: (old-k8s-version-919612)       <target port='0'/>
	I0429 19:55:55.943336   62888 main.go:141] libmachine: (old-k8s-version-919612)     </serial>
	I0429 19:55:55.943349   62888 main.go:141] libmachine: (old-k8s-version-919612)     <console type='pty'>
	I0429 19:55:55.943383   62888 main.go:141] libmachine: (old-k8s-version-919612)       <target type='serial' port='0'/>
	I0429 19:55:55.943395   62888 main.go:141] libmachine: (old-k8s-version-919612)     </console>
	I0429 19:55:55.943407   62888 main.go:141] libmachine: (old-k8s-version-919612)     <rng model='virtio'>
	I0429 19:55:55.943421   62888 main.go:141] libmachine: (old-k8s-version-919612)       <backend model='random'>/dev/random</backend>
	I0429 19:55:55.943432   62888 main.go:141] libmachine: (old-k8s-version-919612)     </rng>
	I0429 19:55:55.943444   62888 main.go:141] libmachine: (old-k8s-version-919612)     
	I0429 19:55:55.943459   62888 main.go:141] libmachine: (old-k8s-version-919612)     
	I0429 19:55:55.943471   62888 main.go:141] libmachine: (old-k8s-version-919612)   </devices>
	I0429 19:55:55.943479   62888 main.go:141] libmachine: (old-k8s-version-919612) </domain>
	I0429 19:55:55.943494   62888 main.go:141] libmachine: (old-k8s-version-919612) 
	I0429 19:55:55.948270   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:fa:ca:94 in network default
	I0429 19:55:55.948904   62888 main.go:141] libmachine: (old-k8s-version-919612) Ensuring networks are active...
	I0429 19:55:55.948934   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:55:55.949846   62888 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network default is active
	I0429 19:55:55.950299   62888 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network mk-old-k8s-version-919612 is active
	I0429 19:55:55.951107   62888 main.go:141] libmachine: (old-k8s-version-919612) Getting domain xml...
	I0429 19:55:55.951978   62888 main.go:141] libmachine: (old-k8s-version-919612) Creating domain...
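
The "define libvirt domain using xml" and "Creating domain..." lines above mirror the network step: the driver turns the <domain type='kvm'> document it just assembled into a persistent domain and boots it. A hypothetical helper using the same bindings (conn and domainXML are assumed to come from the previous sketch and from the XML above; this is not the driver's real code):

    // defineAndStartDomain persists the domain definition and boots the VM;
    // after this, DHCP on the private network hands the guest its IP address.
    func defineAndStartDomain(conn *libvirt.Connect, domainXML string) error {
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()
        return dom.Create()
    }
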
	I0429 19:55:57.387899   62888 main.go:141] libmachine: (old-k8s-version-919612) Waiting to get IP...
	I0429 19:55:57.390403   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:55:57.390938   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:55:57.390966   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:57.390908   63060 retry.go:31] will retry after 280.300742ms: waiting for machine to come up
	I0429 19:55:57.672762   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:55:57.673391   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:55:57.673418   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:57.673358   63060 retry.go:31] will retry after 291.215164ms: waiting for machine to come up
	I0429 19:55:57.966096   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:55:57.966641   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:55:57.966665   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:57.966595   63060 retry.go:31] will retry after 402.673306ms: waiting for machine to come up
	I0429 19:55:58.375265   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:55:58.375848   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:55:58.375883   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:58.375766   63060 retry.go:31] will retry after 490.470188ms: waiting for machine to come up
	I0429 19:55:59.306828   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:55:59.310629   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:55:59.310654   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:59.310577   63060 retry.go:31] will retry after 507.284008ms: waiting for machine to come up
	I0429 19:55:59.819194   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:55:59.820459   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:55:59.820543   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:55:59.820382   63060 retry.go:31] will retry after 697.044073ms: waiting for machine to come up
	I0429 19:56:00.519516   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:00.520079   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:56:00.520112   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:56:00.520024   63060 retry.go:31] will retry after 1.102679954s: waiting for machine to come up
	I0429 19:56:01.624838   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:01.625440   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:56:01.625476   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:56:01.625383   63060 retry.go:31] will retry after 1.417552686s: waiting for machine to come up
	I0429 19:56:03.045209   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:03.045663   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:56:03.045694   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:56:03.045617   63060 retry.go:31] will retry after 1.791344377s: waiting for machine to come up
	I0429 19:56:04.839108   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:04.839574   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:56:04.839597   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:56:04.839531   63060 retry.go:31] will retry after 2.243676251s: waiting for machine to come up
	I0429 19:56:07.085481   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:07.086146   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:56:07.086181   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:56:07.086092   63060 retry.go:31] will retry after 2.237183655s: waiting for machine to come up
	I0429 19:56:09.325646   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:09.326165   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:56:09.326195   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:56:09.326146   63060 retry.go:31] will retry after 3.104706047s: waiting for machine to come up
	I0429 19:56:12.432166   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:12.432639   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:56:12.432661   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:56:12.432601   63060 retry.go:31] will retry after 4.512994972s: waiting for machine to come up
	I0429 19:56:16.947076   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:16.947563   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 19:56:16.947583   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 19:56:16.947528   63060 retry.go:31] will retry after 5.09713301s: waiting for machine to come up
	I0429 19:56:22.046159   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:22.046823   62888 main.go:141] libmachine: (old-k8s-version-919612) Found IP for machine: 192.168.72.240
	I0429 19:56:22.046872   62888 main.go:141] libmachine: (old-k8s-version-919612) Reserving static IP address...
	I0429 19:56:22.046888   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has current primary IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:22.047269   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"} in network mk-old-k8s-version-919612
	I0429 19:56:22.125804   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Getting to WaitForSSH function...
	I0429 19:56:22.125827   62888 main.go:141] libmachine: (old-k8s-version-919612) Reserved static IP address: 192.168.72.240
	I0429 19:56:22.125869   62888 main.go:141] libmachine: (old-k8s-version-919612) Waiting for SSH to be available...
	I0429 19:56:22.128650   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:22.128992   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612
	I0429 19:56:22.129023   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find defined IP address of network mk-old-k8s-version-919612 interface with MAC address 52:54:00:62:23:ed
	I0429 19:56:22.129131   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH client type: external
	I0429 19:56:22.129158   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa (-rw-------)
	I0429 19:56:22.129206   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:56:22.129234   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | About to run SSH command:
	I0429 19:56:22.129275   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | exit 0
	I0429 19:56:22.132839   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | SSH cmd err, output: exit status 255: 
	I0429 19:56:22.132861   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0429 19:56:22.132869   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | command : exit 0
	I0429 19:56:22.132877   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | err     : exit status 255
	I0429 19:56:22.132884   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | output  : 
	I0429 19:56:25.133065   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Getting to WaitForSSH function...
	I0429 19:56:25.135653   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.136026   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:25.136059   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.136229   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH client type: external
	I0429 19:56:25.136264   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa (-rw-------)
	I0429 19:56:25.136289   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 19:56:25.136299   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | About to run SSH command:
	I0429 19:56:25.136309   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | exit 0
	I0429 19:56:25.262624   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | SSH cmd err, output: <nil>: 
	I0429 19:56:25.262917   62888 main.go:141] libmachine: (old-k8s-version-919612) KVM machine creation complete!
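
Everything between "Waiting to get IP..." and "KVM machine creation complete!" is one polling loop: each "will retry after ..." line is another look at the network's DHCP leases with a growing, jittered delay, and once an address appears the driver keeps running "exit 0" over SSH until it succeeds. A minimal sketch of that wait-and-retry pattern is below; waitFor and its timings are invented for illustration, and the real driver shells out to an external ssh client rather than merely dialing the port.

    package main

    import (
        "fmt"
        "math/rand"
        "net"
        "time"
    )

    // waitFor retries fn with a growing, jittered delay, roughly like the
    // "will retry after ..." lines in the log above.
    func waitFor(fn func() error, attempts int, base time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        addr := "192.168.72.240:22" // the address the DHCP lease finally reported
        // Treat "SSH available" as "port 22 accepts a TCP connection".
        err := waitFor(func() error {
            c, err := net.DialTimeout("tcp", addr, 3*time.Second)
            if err != nil {
                return err
            }
            return c.Close()
        }, 15, 300*time.Millisecond)
        fmt.Println("machine reachable:", err == nil)
    }
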
	I0429 19:56:25.263181   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetConfigRaw
	I0429 19:56:25.263720   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 19:56:25.263914   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 19:56:25.264131   62888 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 19:56:25.264147   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetState
	I0429 19:56:25.265396   62888 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 19:56:25.265409   62888 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 19:56:25.265414   62888 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 19:56:25.265421   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 19:56:25.267599   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.267868   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:25.267894   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.267999   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 19:56:25.268163   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:25.268312   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:25.268437   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 19:56:25.268642   62888 main.go:141] libmachine: Using SSH client type: native
	I0429 19:56:25.268832   62888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 19:56:25.268843   62888 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 19:56:25.377802   62888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 19:56:25.377829   62888 main.go:141] libmachine: Detecting the provisioner...
	I0429 19:56:25.377839   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 19:56:25.380587   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.380879   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:25.380907   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.381184   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 19:56:25.381422   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:25.381607   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:25.381803   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 19:56:25.381968   62888 main.go:141] libmachine: Using SSH client type: native
	I0429 19:56:25.382165   62888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 19:56:25.382183   62888 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 19:56:25.491521   62888 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 19:56:25.491604   62888 main.go:141] libmachine: found compatible host: buildroot
	I0429 19:56:25.491619   62888 main.go:141] libmachine: Provisioning with buildroot...
	I0429 19:56:25.491634   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 19:56:25.491888   62888 buildroot.go:166] provisioning hostname "old-k8s-version-919612"
	I0429 19:56:25.491920   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 19:56:25.492127   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 19:56:25.494890   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.495264   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:25.495292   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.495462   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 19:56:25.495667   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:25.495837   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:25.495957   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 19:56:25.496114   62888 main.go:141] libmachine: Using SSH client type: native
	I0429 19:56:25.496309   62888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 19:56:25.496325   62888 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-919612 && echo "old-k8s-version-919612" | sudo tee /etc/hostname
	I0429 19:56:25.623635   62888 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-919612
	
	I0429 19:56:25.623660   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 19:56:25.626547   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.626947   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:25.626981   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.627167   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 19:56:25.627351   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:25.627536   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:25.627670   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 19:56:25.627824   62888 main.go:141] libmachine: Using SSH client type: native
	I0429 19:56:25.627985   62888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 19:56:25.628002   62888 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-919612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-919612/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-919612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 19:56:25.750618   62888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
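
Provisioning then continues as plain shell one-liners executed over that SSH connection, exactly like the hostname and /etc/hosts commands shown above. As a sketch of running such a command from Go with golang.org/x/crypto/ssh (user, key path and address are taken from the log; this is not the provisioner's actual implementation):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key, user and address as reported earlier in this log.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
        }
        client, err := ssh.Dial("tcp", "192.168.72.240:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // The same kind of one-liner the provisioner runs to set the hostname.
        out, err := sess.CombinedOutput(`sudo hostname old-k8s-version-919612 && echo "old-k8s-version-919612" | sudo tee /etc/hostname`)
        fmt.Println(string(out), err)
    }
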
	I0429 19:56:25.750658   62888 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 19:56:25.750692   62888 buildroot.go:174] setting up certificates
	I0429 19:56:25.750701   62888 provision.go:84] configureAuth start
	I0429 19:56:25.750710   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 19:56:25.751003   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 19:56:25.753798   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.754309   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:25.754334   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.754539   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 19:56:25.757195   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.757537   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:25.757562   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.757727   62888 provision.go:143] copyHostCerts
	I0429 19:56:25.757780   62888 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 19:56:25.757790   62888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 19:56:25.757843   62888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 19:56:25.757941   62888 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 19:56:25.757951   62888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 19:56:25.757984   62888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 19:56:25.758126   62888 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 19:56:25.758138   62888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 19:56:25.758176   62888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 19:56:25.758263   62888 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-919612 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-919612]
	I0429 19:56:25.962146   62888 provision.go:177] copyRemoteCerts
	I0429 19:56:25.962251   62888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 19:56:25.962290   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 19:56:25.965319   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.965711   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:25.965739   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:25.966009   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 19:56:25.966212   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:25.966402   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 19:56:25.966536   62888 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 19:56:26.053624   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 19:56:26.082358   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0429 19:56:26.109913   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 19:56:26.138690   62888 provision.go:87] duration metric: took 387.978089ms to configureAuth
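For reference (not part of the captured log), a minimal shell sketch of checking the TLS material that configureAuth just provisioned; the /etc/docker paths are the remote cert paths listed in the auth options above:

    # List the certs copyRemoteCerts placed on the guest (paths from the log above).
    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    # Confirm the server cert carries the SANs requested in the generation step.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'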
	I0429 19:56:26.138719   62888 buildroot.go:189] setting minikube options for container-runtime
	I0429 19:56:26.138906   62888 config.go:182] Loaded profile config "old-k8s-version-919612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 19:56:26.138976   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 19:56:26.141472   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.141830   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:26.141862   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.142008   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 19:56:26.142256   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:26.142432   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:26.142604   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 19:56:26.142789   62888 main.go:141] libmachine: Using SSH client type: native
	I0429 19:56:26.142967   62888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 19:56:26.142989   62888 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 19:56:26.627617   62888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
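A small sketch, assumed rather than taken from the log, of confirming that the CRIO_MINIKUBE_OPTIONS drop-in written above survived the crio restart:

    # Show the drop-in that was just written and check that crio came back up.
    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio
    # The unit definition shows where the EnvironmentFile is consumed.
    systemctl cat crio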
	I0429 19:56:26.627652   62888 main.go:141] libmachine: Checking connection to Docker...
	I0429 19:56:26.627660   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetURL
	I0429 19:56:26.628908   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using libvirt version 6000000
	I0429 19:56:26.630904   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.631210   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:26.631232   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.631444   62888 main.go:141] libmachine: Docker is up and running!
	I0429 19:56:26.631461   62888 main.go:141] libmachine: Reticulating splines...
	I0429 19:56:26.631485   62888 client.go:171] duration metric: took 31.105029903s to LocalClient.Create
	I0429 19:56:26.631515   62888 start.go:167] duration metric: took 31.105148949s to libmachine.API.Create "old-k8s-version-919612"
	I0429 19:56:26.631527   62888 start.go:293] postStartSetup for "old-k8s-version-919612" (driver="kvm2")
	I0429 19:56:26.631540   62888 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 19:56:26.631556   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 19:56:26.631815   62888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 19:56:26.631847   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 19:56:26.634016   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.634543   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:26.634579   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.634716   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 19:56:26.634935   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:26.635107   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 19:56:26.635242   62888 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 19:56:26.721699   62888 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 19:56:26.726944   62888 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 19:56:26.726977   62888 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 19:56:26.727063   62888 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 19:56:26.727134   62888 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 19:56:26.727218   62888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 19:56:26.738223   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:56:26.766896   62888 start.go:296] duration metric: took 135.354022ms for postStartSetup
	I0429 19:56:26.766944   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetConfigRaw
	I0429 19:56:26.767604   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 19:56:26.770213   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.770560   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:26.770591   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.770788   62888 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/config.json ...
	I0429 19:56:26.770970   62888 start.go:128] duration metric: took 31.267018682s to createHost
	I0429 19:56:26.770989   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 19:56:26.773125   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.773422   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:26.773450   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.773561   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 19:56:26.773746   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:26.773898   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:26.774094   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 19:56:26.774329   62888 main.go:141] libmachine: Using SSH client type: native
	I0429 19:56:26.774517   62888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 19:56:26.774531   62888 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 19:56:26.883196   62888 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714420586.872512332
	
	I0429 19:56:26.883224   62888 fix.go:216] guest clock: 1714420586.872512332
	I0429 19:56:26.883233   62888 fix.go:229] Guest: 2024-04-29 19:56:26.872512332 +0000 UTC Remote: 2024-04-29 19:56:26.770981511 +0000 UTC m=+45.442605977 (delta=101.530821ms)
	I0429 19:56:26.883280   62888 fix.go:200] guest clock delta is within tolerance: 101.530821ms
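The fix.go lines above compare the guest and host wall clocks and skip any correction when the skew is small; a rough, assumed equivalent of that comparison from the host side, reusing the SSH key path and user already shown in this log:

    host_now=$(date +%s.%N)
    guest_now=$(ssh -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa \
        docker@192.168.72.240 'date +%s.%N')
    # Print the absolute skew in seconds; here it was ~0.1s, well inside tolerance.
    echo "$host_now $guest_now" | awk '{ d = $2 - $1; if (d < 0) d = -d; printf "delta=%.3fs\n", d }'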
	I0429 19:56:26.883290   62888 start.go:83] releasing machines lock for "old-k8s-version-919612", held for 31.379544753s
	I0429 19:56:26.883319   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 19:56:26.883568   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 19:56:26.886403   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.886825   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:26.886847   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.886933   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 19:56:26.887392   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 19:56:26.887589   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 19:56:26.887709   62888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 19:56:26.887747   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 19:56:26.887847   62888 ssh_runner.go:195] Run: cat /version.json
	I0429 19:56:26.887875   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 19:56:26.890373   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.890724   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.890770   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:26.890801   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.890889   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 19:56:26.891056   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:26.891108   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:26.891185   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:26.891210   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 19:56:26.891293   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 19:56:26.891399   62888 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 19:56:26.891465   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 19:56:26.891775   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 19:56:26.891964   62888 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 19:56:26.983911   62888 ssh_runner.go:195] Run: systemctl --version
	I0429 19:56:27.008945   62888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 19:56:27.180421   62888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 19:56:27.189045   62888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 19:56:27.189109   62888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 19:56:27.209126   62888 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 19:56:27.209150   62888 start.go:494] detecting cgroup driver to use...
	I0429 19:56:27.209222   62888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 19:56:27.234243   62888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 19:56:27.251269   62888 docker.go:217] disabling cri-docker service (if available) ...
	I0429 19:56:27.251331   62888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 19:56:27.269261   62888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 19:56:27.286977   62888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 19:56:27.433333   62888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 19:56:27.606087   62888 docker.go:233] disabling docker service ...
	I0429 19:56:27.606161   62888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 19:56:27.622921   62888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 19:56:27.637895   62888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 19:56:27.773576   62888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 19:56:27.907877   62888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
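A short, assumed verification that the cri-docker/docker shutdown sequence above left both runtimes stopped and masked before CRI-O is configured:

    # Both docker and cri-docker should now report inactive/masked.
    systemctl is-active docker.service docker.socket cri-docker.service cri-docker.socket
    systemctl is-enabled docker.socket cri-docker.socket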
	I0429 19:56:27.923816   62888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 19:56:27.945384   62888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0429 19:56:27.945444   62888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:56:27.962481   62888 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 19:56:27.962556   62888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:56:27.975116   62888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:56:27.989641   62888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 19:56:28.003330   62888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 19:56:28.016470   62888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 19:56:28.028465   62888 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 19:56:28.028529   62888 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 19:56:28.045760   62888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
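The sysctl probe above fails until br_netfilter is loaded, which is why the modprobe follows; an assumed check of the resulting kernel state:

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # kubeadm's preflight checks expect both values to be 1 once the module is loaded.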
	I0429 19:56:28.058116   62888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:56:28.183872   62888 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 19:56:28.342816   62888 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 19:56:28.342894   62888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 19:56:28.348719   62888 start.go:562] Will wait 60s for crictl version
	I0429 19:56:28.348779   62888 ssh_runner.go:195] Run: which crictl
	I0429 19:56:28.353302   62888 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 19:56:28.400093   62888 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 19:56:28.400242   62888 ssh_runner.go:195] Run: crio --version
	I0429 19:56:28.433168   62888 ssh_runner.go:195] Run: crio --version
	I0429 19:56:28.469665   62888 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0429 19:56:28.471177   62888 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 19:56:28.473933   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:28.474407   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 20:56:12 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 19:56:28.474449   62888 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 19:56:28.474552   62888 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0429 19:56:28.479800   62888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:56:28.494616   62888 kubeadm.go:877] updating cluster {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 19:56:28.494749   62888 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 19:56:28.494816   62888 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:56:28.531977   62888 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 19:56:28.532078   62888 ssh_runner.go:195] Run: which lz4
	I0429 19:56:28.537024   62888 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 19:56:28.544977   62888 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 19:56:28.545031   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0429 19:56:30.693374   62888 crio.go:462] duration metric: took 2.156405851s to copy over tarball
	I0429 19:56:30.693469   62888 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 19:56:33.667061   62888 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.97356499s)
	I0429 19:56:33.667088   62888 crio.go:469] duration metric: took 2.973687952s to extract the tarball
	I0429 19:56:33.667099   62888 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 19:56:33.724388   62888 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 19:56:33.777190   62888 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
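Despite the preload tarball being extracted into /var, the second `sudo crictl images` still reports the v1.20.0 images as missing; an assumed way to inspect that state by hand on the node:

    sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|etcd|coredns' || echo "v1.20.0 images not present"
    # Default containers/storage location used by CRI-O (assumed path, not from the log).
    sudo du -sh /var/lib/containers/storage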
	I0429 19:56:33.777227   62888 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 19:56:33.777307   62888 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:56:33.777330   62888 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0429 19:56:33.777342   62888 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0429 19:56:33.777347   62888 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 19:56:33.777319   62888 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 19:56:33.777389   62888 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 19:56:33.777607   62888 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0429 19:56:33.777688   62888 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 19:56:33.779132   62888 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 19:56:33.779165   62888 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 19:56:33.779178   62888 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0429 19:56:33.779134   62888 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0429 19:56:33.779223   62888 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:56:33.779238   62888 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 19:56:33.779282   62888 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 19:56:33.779469   62888 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0429 19:56:33.925148   62888 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0429 19:56:33.951644   62888 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 19:56:33.972754   62888 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0429 19:56:33.972821   62888 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0429 19:56:33.972877   62888 ssh_runner.go:195] Run: which crictl
	I0429 19:56:34.007093   62888 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0429 19:56:34.010555   62888 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0429 19:56:34.017884   62888 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0429 19:56:34.017931   62888 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 19:56:34.017974   62888 ssh_runner.go:195] Run: which crictl
	I0429 19:56:34.017979   62888 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0429 19:56:34.085903   62888 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0429 19:56:34.085954   62888 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0429 19:56:34.086009   62888 ssh_runner.go:195] Run: which crictl
	I0429 19:56:34.093425   62888 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 19:56:34.093511   62888 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0429 19:56:34.093555   62888 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0429 19:56:34.093598   62888 ssh_runner.go:195] Run: which crictl
	I0429 19:56:34.118110   62888 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0429 19:56:34.118241   62888 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0429 19:56:34.154353   62888 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0429 19:56:34.156613   62888 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0429 19:56:34.164264   62888 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0429 19:56:34.166785   62888 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0429 19:56:34.166881   62888 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0429 19:56:34.218451   62888 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0429 19:56:34.271019   62888 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0429 19:56:34.271066   62888 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 19:56:34.271119   62888 ssh_runner.go:195] Run: which crictl
	I0429 19:56:34.305237   62888 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0429 19:56:34.305284   62888 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 19:56:34.305334   62888 ssh_runner.go:195] Run: which crictl
	I0429 19:56:34.319523   62888 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0429 19:56:34.319526   62888 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0429 19:56:34.319586   62888 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 19:56:34.319617   62888 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0429 19:56:34.319621   62888 ssh_runner.go:195] Run: which crictl
	I0429 19:56:34.319622   62888 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0429 19:56:34.379647   62888 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0429 19:56:34.379673   62888 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0429 19:56:34.379725   62888 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0429 19:56:34.427665   62888 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0429 19:56:34.687909   62888 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 19:56:34.838461   62888 cache_images.go:92] duration metric: took 1.061213114s to LoadCachedImages
	W0429 19:56:34.838579   62888 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
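LoadCachedImages gives up here because the per-image cache files on the host (for example the etcd_3.4.13-0 path above) do not exist; one assumed workaround is to pull the same images directly through CRI-O, using the image list from the LoadCachedImages line:

    for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
      sudo crictl pull "registry.k8s.io/${img}:v1.20.0"
    done
    sudo crictl pull registry.k8s.io/etcd:3.4.13-0
    sudo crictl pull registry.k8s.io/coredns:1.7.0
    sudo crictl pull registry.k8s.io/pause:3.2
    sudo crictl pull gcr.io/k8s-minikube/storage-provisioner:v5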
	I0429 19:56:34.838598   62888 kubeadm.go:928] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I0429 19:56:34.838753   62888 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-919612 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 19:56:34.838859   62888 ssh_runner.go:195] Run: crio config
	I0429 19:56:34.897809   62888 cni.go:84] Creating CNI manager for ""
	I0429 19:56:34.897838   62888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 19:56:34.897852   62888 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 19:56:34.897882   62888 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-919612 NodeName:old-k8s-version-919612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0429 19:56:34.898132   62888 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-919612"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
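The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml (both steps appear further down in this log); an assumed way to sanity-check it against the same kubeadm binary without applying anything:

    # --dry-run renders what kubeadm init would do for this config without changing the node.
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run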
	I0429 19:56:34.898218   62888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0429 19:56:34.909400   62888 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 19:56:34.909483   62888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 19:56:34.920184   62888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0429 19:56:34.940422   62888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 19:56:34.963221   62888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0429 19:56:34.986235   62888 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I0429 19:56:34.991751   62888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 19:56:35.007615   62888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 19:56:35.145495   62888 ssh_runner.go:195] Run: sudo systemctl start kubelet
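An assumed follow-up check of the kubelet unit and drop-in that were just written and started; the kubelet normally keeps restarting at this stage until kubeadm init produces /etc/kubernetes/kubelet.conf:

    # Shows the 10-kubeadm.conf drop-in with the ExecStart flags listed above.
    systemctl cat kubelet
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet --no-pager -n 20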
	I0429 19:56:35.165180   62888 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612 for IP: 192.168.72.240
	I0429 19:56:35.165208   62888 certs.go:194] generating shared ca certs ...
	I0429 19:56:35.165228   62888 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:56:35.165404   62888 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 19:56:35.165458   62888 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 19:56:35.165475   62888 certs.go:256] generating profile certs ...
	I0429 19:56:35.165542   62888 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.key
	I0429 19:56:35.165556   62888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt with IP's: []
	I0429 19:56:35.347880   62888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt ...
	I0429 19:56:35.347912   62888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: {Name:mk6442ec8b17aa42dbe10dd46dbdcc34fee9c27a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:56:35.348121   62888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.key ...
	I0429 19:56:35.348140   62888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.key: {Name:mka0aa621ced8f0904786c7ee296415068ba02b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:56:35.348264   62888 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key.5df5e618
	I0429 19:56:35.348288   62888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.crt.5df5e618 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.240]
	I0429 19:56:35.573297   62888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.crt.5df5e618 ...
	I0429 19:56:35.573329   62888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.crt.5df5e618: {Name:mk924ebe195f9c74d04386f1ee0ee36cc84e70db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:56:35.573513   62888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key.5df5e618 ...
	I0429 19:56:35.573530   62888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key.5df5e618: {Name:mk1d27f77af878d3f902e23fa9ec1445d6a0161c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:56:35.573630   62888 certs.go:381] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.crt.5df5e618 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.crt
	I0429 19:56:35.573724   62888 certs.go:385] copying /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key.5df5e618 -> /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key
	I0429 19:56:35.573802   62888 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key
	I0429 19:56:35.573821   62888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.crt with IP's: []
	I0429 19:56:35.736196   62888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.crt ...
	I0429 19:56:35.736224   62888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.crt: {Name:mke61a94e429e76eea3ec51f1094e24d62430950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:56:35.736399   62888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key ...
	I0429 19:56:35.736416   62888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key: {Name:mk346b12853a4171cdb3531b30de5c1498a7f1e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 19:56:35.736579   62888 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 19:56:35.736620   62888 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 19:56:35.736630   62888 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 19:56:35.736653   62888 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 19:56:35.736674   62888 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 19:56:35.736696   62888 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 19:56:35.736730   62888 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 19:56:35.737374   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 19:56:35.768632   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 19:56:35.803895   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 19:56:35.838928   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 19:56:35.876292   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 19:56:35.912520   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 19:56:35.942175   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 19:56:35.974567   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 19:56:36.008157   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 19:56:36.049185   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 19:56:36.086352   62888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 19:56:36.119486   62888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
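An assumed check that the apiserver certificate copied above carries the SANs requested in the san=[...] generation step earlier in this log:

    # Expect at least 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.240 among the SANs.
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'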
	I0429 19:56:36.140709   62888 ssh_runner.go:195] Run: openssl version
	I0429 19:56:36.147731   62888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 19:56:36.161251   62888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:56:36.167024   62888 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:56:36.167095   62888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 19:56:36.174106   62888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 19:56:36.186614   62888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 19:56:36.198971   62888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 19:56:36.204235   62888 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 19:56:36.204303   62888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 19:56:36.210996   62888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 19:56:36.223145   62888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 19:56:36.234921   62888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 19:56:36.239993   62888 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 19:56:36.240050   62888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 19:56:36.247065   62888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
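The `openssl x509 -hash` calls above derive the subject hashes used to name the /etc/ssl/certs/*.0 trust-store symlinks (b5213941.0, 51391683.0, 3ec20f2e.0); an assumed way to confirm the links and the resulting chain:

    # Should print b5213941, matching the symlink created above.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0 /etc/ssl/certs/51391683.0 /etc/ssl/certs/3ec20f2e.0
    # apiserver.crt is signed by the minikube CA, so verification via the hash links should report OK.
    sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt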
	I0429 19:56:36.260335   62888 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 19:56:36.265396   62888 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 19:56:36.265463   62888 kubeadm.go:391] StartCluster: {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 19:56:36.265565   62888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 19:56:36.265613   62888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 19:56:36.308661   62888 cri.go:89] found id: ""
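For reference, 'found id: ""' means the query above returned no container IDs, i.e. CRI-O has not created any kube-system containers yet, which is expected on a first start. The same check can be run by hand on the node (a sketch, assuming the default CRI-O socket):

    # List kube-system container IDs the same way minikube does; empty output
    # simply means no control-plane containers exist yet.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # Human-readable view of everything CRI-O is tracking:
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a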
	I0429 19:56:36.308735   62888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 19:56:36.319640   62888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 19:56:36.331438   62888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 19:56:36.341926   62888 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 19:56:36.341944   62888 kubeadm.go:156] found existing configuration files:
	
	I0429 19:56:36.341981   62888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 19:56:36.352014   62888 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 19:56:36.352066   62888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 19:56:36.363780   62888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 19:56:36.374677   62888 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 19:56:36.374760   62888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 19:56:36.386966   62888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 19:56:36.398328   62888 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 19:56:36.398402   62888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 19:56:36.409893   62888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 19:56:36.420637   62888 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 19:56:36.420705   62888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
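The sequence above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when the endpoint is absent (here the files simply do not exist yet, so every grep exits with status 2). A compressed sketch of the same loop, as hypothetical shell using the paths from the log:

    # Keep each kubeconfig only if it already points at the expected endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # missing or stale: remove before 'kubeadm init'
      fi
    done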
	I0429 19:56:36.432139   62888 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 19:56:36.550718   62888 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 19:56:36.550795   62888 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 19:56:36.728981   62888 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 19:56:36.729111   62888 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 19:56:36.729242   62888 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 19:56:36.985527   62888 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 19:56:36.987330   62888 out.go:204]   - Generating certificates and keys ...
	I0429 19:56:36.987427   62888 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 19:56:36.987521   62888 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 19:56:37.118858   62888 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 19:56:37.333089   62888 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 19:56:37.496173   62888 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 19:56:37.641895   62888 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 19:56:37.873143   62888 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 19:56:37.873528   62888 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-919612] and IPs [192.168.72.240 127.0.0.1 ::1]
	I0429 19:56:38.271697   62888 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 19:56:38.275435   62888 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-919612] and IPs [192.168.72.240 127.0.0.1 ::1]
	I0429 19:56:38.660743   62888 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 19:56:39.185264   62888 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 19:56:39.473940   62888 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 19:56:39.474229   62888 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 19:56:39.859551   62888 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 19:56:40.022392   62888 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 19:56:40.836485   62888 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 19:56:41.102490   62888 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 19:56:41.121881   62888 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 19:56:41.123300   62888 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 19:56:41.123392   62888 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 19:56:41.286773   62888 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 19:56:41.288542   62888 out.go:204]   - Booting up control plane ...
	I0429 19:56:41.288652   62888 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 19:56:41.301600   62888 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 19:56:41.304419   62888 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 19:56:41.304492   62888 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 19:56:41.308498   62888 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 19:57:21.307942   62888 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 19:57:21.308668   62888 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:57:21.309037   62888 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:57:26.309655   62888 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:57:26.310111   62888 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:57:36.311240   62888 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:57:36.311522   62888 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:57:56.312127   62888 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:57:56.312384   62888 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:58:36.313502   62888 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 19:58:36.313785   62888 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 19:58:36.313799   62888 kubeadm.go:309] 
	I0429 19:58:36.313859   62888 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 19:58:36.313917   62888 kubeadm.go:309] 		timed out waiting for the condition
	I0429 19:58:36.313963   62888 kubeadm.go:309] 
	I0429 19:58:36.314030   62888 kubeadm.go:309] 	This error is likely caused by:
	I0429 19:58:36.314101   62888 kubeadm.go:309] 		- The kubelet is not running
	I0429 19:58:36.314282   62888 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 19:58:36.314302   62888 kubeadm.go:309] 
	I0429 19:58:36.314432   62888 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 19:58:36.314491   62888 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 19:58:36.314534   62888 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 19:58:36.314563   62888 kubeadm.go:309] 
	I0429 19:58:36.314762   62888 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 19:58:36.314909   62888 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 19:58:36.314924   62888 kubeadm.go:309] 
	I0429 19:58:36.315082   62888 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 19:58:36.315219   62888 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 19:58:36.315331   62888 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 19:58:36.315428   62888 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 19:58:36.315442   62888 kubeadm.go:309] 
	I0429 19:58:36.315832   62888 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 19:58:36.315961   62888 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 19:58:36.316059   62888 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
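The failure mode above is kubeadm polling the kubelet's local health endpoint on port 10248 and getting "connection refused" until its wait-control-plane timeout expires. The checks kubeadm suggests can be run directly on the node; a sketch using the same commands the log recommends:

    # Probe the healthz endpoint kubeadm is polling:
    curl -sSL http://localhost:10248/healthz; echo
    # Inspect the kubelet unit and its recent logs:
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    # Check whether any control-plane containers were created at all:
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause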
	W0429 19:58:36.316251   62888 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-919612] and IPs [192.168.72.240 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-919612] and IPs [192.168.72.240 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-919612] and IPs [192.168.72.240 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-919612] and IPs [192.168.72.240 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0429 19:58:36.316326   62888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 19:58:38.332099   62888 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.015744446s)
	I0429 19:58:38.332172   62888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
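Before retrying, minikube wipes the partial first attempt with 'kubeadm reset' and then checks whether the kubelet unit is active. Roughly equivalent by hand (a sketch, assuming the same binary path and CRI socket as the log):

    # Undo the failed init so the second attempt starts from a clean state.
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    # Exit status 0 means the kubelet unit is active; the command prints nothing either way.
    sudo systemctl is-active --quiet kubelet && echo "kubelet active" || echo "kubelet not active"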
	I0429 19:58:38.347419   62888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 19:58:38.358162   62888 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 19:58:38.358187   62888 kubeadm.go:156] found existing configuration files:
	
	I0429 19:58:38.358239   62888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 19:58:38.368050   62888 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 19:58:38.368109   62888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 19:58:38.378639   62888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 19:58:38.391657   62888 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 19:58:38.391735   62888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 19:58:38.403785   62888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 19:58:38.415871   62888 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 19:58:38.415941   62888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 19:58:38.427625   62888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 19:58:38.438277   62888 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 19:58:38.438352   62888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 19:58:38.448795   62888 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 19:58:38.684541   62888 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:00:34.672961   62888 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:00:34.673107   62888 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:00:34.674891   62888 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:00:34.674956   62888 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:00:34.675025   62888 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:00:34.675105   62888 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:00:34.675279   62888 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:00:34.675377   62888 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:00:34.677183   62888 out.go:204]   - Generating certificates and keys ...
	I0429 20:00:34.677243   62888 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:00:34.677300   62888 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:00:34.677375   62888 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:00:34.677431   62888 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:00:34.677497   62888 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:00:34.677588   62888 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:00:34.677686   62888 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:00:34.677786   62888 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:00:34.677900   62888 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:00:34.678015   62888 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:00:34.678084   62888 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:00:34.678154   62888 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:00:34.678217   62888 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:00:34.678282   62888 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:00:34.678367   62888 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:00:34.678434   62888 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:00:34.678561   62888 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:00:34.678663   62888 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:00:34.678712   62888 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:00:34.678793   62888 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:00:34.680280   62888 out.go:204]   - Booting up control plane ...
	I0429 20:00:34.680414   62888 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:00:34.680518   62888 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:00:34.680610   62888 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:00:34.680740   62888 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:00:34.680964   62888 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:00:34.681018   62888 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:00:34.681099   62888 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:00:34.681313   62888 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:00:34.681394   62888 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:00:34.681646   62888 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:00:34.681726   62888 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:00:34.681946   62888 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:00:34.682025   62888 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:00:34.682294   62888 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:00:34.682417   62888 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:00:34.682612   62888 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:00:34.682624   62888 kubeadm.go:309] 
	I0429 20:00:34.682685   62888 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:00:34.682750   62888 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:00:34.682761   62888 kubeadm.go:309] 
	I0429 20:00:34.682817   62888 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:00:34.682861   62888 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:00:34.683000   62888 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:00:34.683009   62888 kubeadm.go:309] 
	I0429 20:00:34.683156   62888 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:00:34.683220   62888 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:00:34.683275   62888 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:00:34.683284   62888 kubeadm.go:309] 
	I0429 20:00:34.683430   62888 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:00:34.683503   62888 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:00:34.683509   62888 kubeadm.go:309] 
	I0429 20:00:34.683651   62888 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:00:34.683767   62888 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:00:34.683833   62888 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:00:34.683900   62888 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:00:34.683937   62888 kubeadm.go:309] 
	I0429 20:00:34.683958   62888 kubeadm.go:393] duration metric: took 3m58.418500346s to StartCluster
	I0429 20:00:34.683992   62888 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:00:34.684038   62888 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:00:34.748527   62888 cri.go:89] found id: ""
	I0429 20:00:34.748556   62888 logs.go:276] 0 containers: []
	W0429 20:00:34.748566   62888 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:00:34.748574   62888 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:00:34.748627   62888 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:00:34.791806   62888 cri.go:89] found id: ""
	I0429 20:00:34.791833   62888 logs.go:276] 0 containers: []
	W0429 20:00:34.791844   62888 logs.go:278] No container was found matching "etcd"
	I0429 20:00:34.791851   62888 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:00:34.791920   62888 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:00:34.829684   62888 cri.go:89] found id: ""
	I0429 20:00:34.829719   62888 logs.go:276] 0 containers: []
	W0429 20:00:34.829729   62888 logs.go:278] No container was found matching "coredns"
	I0429 20:00:34.829736   62888 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:00:34.829812   62888 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:00:34.871750   62888 cri.go:89] found id: ""
	I0429 20:00:34.871774   62888 logs.go:276] 0 containers: []
	W0429 20:00:34.871782   62888 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:00:34.871787   62888 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:00:34.871853   62888 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:00:34.910948   62888 cri.go:89] found id: ""
	I0429 20:00:34.910984   62888 logs.go:276] 0 containers: []
	W0429 20:00:34.910995   62888 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:00:34.911003   62888 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:00:34.911065   62888 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:00:34.951454   62888 cri.go:89] found id: ""
	I0429 20:00:34.951487   62888 logs.go:276] 0 containers: []
	W0429 20:00:34.951498   62888 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:00:34.951506   62888 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:00:34.951571   62888 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:00:34.990456   62888 cri.go:89] found id: ""
	I0429 20:00:34.990493   62888 logs.go:276] 0 containers: []
	W0429 20:00:34.990505   62888 logs.go:278] No container was found matching "kindnet"
	I0429 20:00:34.990517   62888 logs.go:123] Gathering logs for kubelet ...
	I0429 20:00:34.990533   62888 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:00:35.044223   62888 logs.go:123] Gathering logs for dmesg ...
	I0429 20:00:35.044262   62888 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:00:35.059140   62888 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:00:35.059171   62888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:00:35.191582   62888 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:00:35.191614   62888 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:00:35.191628   62888 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:00:35.283768   62888 logs.go:123] Gathering logs for container status ...
	I0429 20:00:35.283809   62888 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
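After the second attempt times out as well, minikube gathers a diagnostics bundle: the kubelet and CRI-O journals, kernel warnings, 'kubectl describe nodes' (which fails here because the API server never came up), and container status. The same bundle can be collected manually; a sketch using the commands from the log:

    sudo journalctl -u kubelet -n 400 --no-pager                               # kubelet logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig                                # refused: apiserver is down
    sudo journalctl -u crio -n 400 --no-pager                                  # CRI-O logs
    sudo crictl ps -a                                                          # container status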
	W0429 20:00:35.327735   62888 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0429 20:00:35.327797   62888 out.go:239] * 
	* 
	W0429 20:00:35.327863   62888 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:00:35.327899   62888 out.go:239] * 
	* 
	W0429 20:00:35.328787   62888 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:00:35.331820   62888 out.go:177] 
	W0429 20:00:35.333145   62888 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:00:35.333192   62888 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0429 20:00:35.333219   62888 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0429 20:00:35.334549   62888 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-919612 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612: exit status 6 (245.258265ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 20:00:35.628232   65804 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-919612" does not appear in /home/jenkins/minikube-integration/18774-7754/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-919612" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (294.32s)
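The failure above is the kubelet never answering on 127.0.0.1:10248 while kubeadm init waits for the v1.20.0 control plane, so minikube exits with K8S_KUBELET_NOT_RUNNING. Below is a minimal sketch of the manual triage that the kubeadm hints and the minikube suggestion in this log point to; the profile name old-k8s-version-919612 and every flag are taken from the output above, but whether the kubelet.cgroup-driver=systemd override actually resolves the failure on this host is an assumption, not something the report confirms.

    # Inspect the kubelet on the guest (the same commands kubeadm suggests, run over minikube ssh)
    out/minikube-linux-amd64 ssh -p old-k8s-version-919612 -- sudo systemctl status kubelet
    out/minikube-linux-amd64 ssh -p old-k8s-version-919612 -- sudo journalctl -xeu kubelet | tail -n 100

    # List any control-plane containers cri-o managed to start before the wait timed out
    out/minikube-linux-amd64 ssh -p old-k8s-version-919612 -- \
      sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # Retry the profile with the cgroup-driver override suggested in the error output
    # (assumption: a kubelet/cri-o cgroup-driver mismatch is the actual cause here)
    out/minikube-linux-amd64 start -p old-k8s-version-919612 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd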

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-161370 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-161370 --alsologtostderr -v=3: exit status 82 (2m0.981606256s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-161370"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:58:05.816912   64511 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:58:05.817193   64511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:58:05.817207   64511 out.go:304] Setting ErrFile to fd 2...
	I0429 19:58:05.817213   64511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:58:05.817425   64511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:58:05.817684   64511 out.go:298] Setting JSON to false
	I0429 19:58:05.817755   64511 mustload.go:65] Loading cluster: embed-certs-161370
	I0429 19:58:05.818129   64511 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:58:05.818224   64511 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/config.json ...
	I0429 19:58:05.818437   64511 mustload.go:65] Loading cluster: embed-certs-161370
	I0429 19:58:05.818547   64511 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:58:05.818571   64511 stop.go:39] StopHost: embed-certs-161370
	I0429 19:58:05.818949   64511 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:58:05.818984   64511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:58:05.835155   64511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I0429 19:58:05.835624   64511 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:58:05.836248   64511 main.go:141] libmachine: Using API Version  1
	I0429 19:58:05.836278   64511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:58:05.836660   64511 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:58:05.839489   64511 out.go:177] * Stopping node "embed-certs-161370"  ...
	I0429 19:58:05.840936   64511 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 19:58:05.840985   64511 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 19:58:05.841232   64511 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 19:58:05.841256   64511 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 19:58:05.844849   64511 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 19:58:05.845337   64511 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 20:57:06 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 19:58:05.845369   64511 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 19:58:05.845567   64511 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 19:58:05.845753   64511 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 19:58:05.845942   64511 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 19:58:05.846090   64511 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 19:58:05.987293   64511 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 19:58:06.052885   64511 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 19:58:06.116969   64511 main.go:141] libmachine: Stopping "embed-certs-161370"...
	I0429 19:58:06.116996   64511 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 19:58:06.119004   64511 main.go:141] libmachine: (embed-certs-161370) Calling .Stop
	I0429 19:58:06.122549   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 0/120
	I0429 19:58:07.124572   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 1/120
	I0429 19:58:08.126245   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 2/120
	I0429 19:58:09.128500   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 3/120
	I0429 19:58:10.130059   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 4/120
	I0429 19:58:11.132241   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 5/120
	I0429 19:58:12.134319   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 6/120
	I0429 19:58:13.135696   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 7/120
	I0429 19:58:14.138165   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 8/120
	I0429 19:58:15.139713   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 9/120
	I0429 19:58:16.141676   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 10/120
	I0429 19:58:17.143189   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 11/120
	I0429 19:58:18.144487   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 12/120
	I0429 19:58:19.145925   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 13/120
	I0429 19:58:20.147306   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 14/120
	I0429 19:58:21.149113   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 15/120
	I0429 19:58:22.150121   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 16/120
	I0429 19:58:23.151761   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 17/120
	I0429 19:58:24.153154   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 18/120
	I0429 19:58:25.154656   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 19/120
	I0429 19:58:26.156638   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 20/120
	I0429 19:58:27.158044   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 21/120
	I0429 19:58:28.159585   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 22/120
	I0429 19:58:29.161010   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 23/120
	I0429 19:58:30.162728   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 24/120
	I0429 19:58:31.164695   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 25/120
	I0429 19:58:32.166252   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 26/120
	I0429 19:58:33.168694   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 27/120
	I0429 19:58:34.170107   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 28/120
	I0429 19:58:35.171497   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 29/120
	I0429 19:58:36.174173   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 30/120
	I0429 19:58:37.175387   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 31/120
	I0429 19:58:38.176918   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 32/120
	I0429 19:58:39.178367   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 33/120
	I0429 19:58:40.180529   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 34/120
	I0429 19:58:41.181976   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 35/120
	I0429 19:58:42.183413   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 36/120
	I0429 19:58:43.184823   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 37/120
	I0429 19:58:44.186270   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 38/120
	I0429 19:58:45.187779   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 39/120
	I0429 19:58:46.189723   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 40/120
	I0429 19:58:47.191124   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 41/120
	I0429 19:58:48.192537   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 42/120
	I0429 19:58:49.194136   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 43/120
	I0429 19:58:50.195827   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 44/120
	I0429 19:58:51.197838   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 45/120
	I0429 19:58:52.199385   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 46/120
	I0429 19:58:53.200755   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 47/120
	I0429 19:58:54.202539   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 48/120
	I0429 19:58:55.204542   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 49/120
	I0429 19:58:56.606798   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 50/120
	I0429 19:58:57.608881   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 51/120
	I0429 19:58:58.611020   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 52/120
	I0429 19:58:59.612528   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 53/120
	I0429 19:59:00.613806   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 54/120
	I0429 19:59:01.615614   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 55/120
	I0429 19:59:02.617164   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 56/120
	I0429 19:59:03.618589   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 57/120
	I0429 19:59:04.620204   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 58/120
	I0429 19:59:05.621916   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 59/120
	I0429 19:59:06.624353   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 60/120
	I0429 19:59:07.626027   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 61/120
	I0429 19:59:08.627802   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 62/120
	I0429 19:59:09.629588   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 63/120
	I0429 19:59:10.631178   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 64/120
	I0429 19:59:11.632755   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 65/120
	I0429 19:59:12.633994   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 66/120
	I0429 19:59:13.635649   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 67/120
	I0429 19:59:14.637245   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 68/120
	I0429 19:59:15.638882   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 69/120
	I0429 19:59:16.641070   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 70/120
	I0429 19:59:17.642418   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 71/120
	I0429 19:59:18.644587   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 72/120
	I0429 19:59:19.646011   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 73/120
	I0429 19:59:20.647512   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 74/120
	I0429 19:59:21.649660   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 75/120
	I0429 19:59:22.651137   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 76/120
	I0429 19:59:23.652638   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 77/120
	I0429 19:59:24.654537   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 78/120
	I0429 19:59:25.656885   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 79/120
	I0429 19:59:26.659219   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 80/120
	I0429 19:59:27.660837   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 81/120
	I0429 19:59:28.662155   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 82/120
	I0429 19:59:29.663839   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 83/120
	I0429 19:59:30.665272   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 84/120
	I0429 19:59:31.667483   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 85/120
	I0429 19:59:32.669248   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 86/120
	I0429 19:59:33.671400   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 87/120
	I0429 19:59:34.673115   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 88/120
	I0429 19:59:35.675122   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 89/120
	I0429 19:59:36.677767   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 90/120
	I0429 19:59:37.679295   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 91/120
	I0429 19:59:38.680768   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 92/120
	I0429 19:59:39.682399   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 93/120
	I0429 19:59:40.684789   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 94/120
	I0429 19:59:41.686933   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 95/120
	I0429 19:59:42.688387   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 96/120
	I0429 19:59:43.690209   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 97/120
	I0429 19:59:44.691705   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 98/120
	I0429 19:59:45.693361   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 99/120
	I0429 19:59:46.695728   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 100/120
	I0429 19:59:47.697122   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 101/120
	I0429 19:59:48.698628   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 102/120
	I0429 19:59:49.700508   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 103/120
	I0429 19:59:50.701969   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 104/120
	I0429 19:59:51.704253   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 105/120
	I0429 19:59:52.706238   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 106/120
	I0429 19:59:53.707682   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 107/120
	I0429 19:59:54.709378   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 108/120
	I0429 19:59:55.710959   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 109/120
	I0429 19:59:56.712838   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 110/120
	I0429 19:59:57.714734   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 111/120
	I0429 19:59:58.716157   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 112/120
	I0429 19:59:59.717847   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 113/120
	I0429 20:00:00.719549   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 114/120
	I0429 20:00:01.721457   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 115/120
	I0429 20:00:02.723376   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 116/120
	I0429 20:00:03.724919   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 117/120
	I0429 20:00:04.726719   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 118/120
	I0429 20:00:05.728951   64511 main.go:141] libmachine: (embed-certs-161370) Waiting for machine to stop 119/120
	I0429 20:00:06.730253   64511 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0429 20:00:06.730346   64511 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0429 20:00:06.732738   64511 out.go:177] 
	W0429 20:00:06.734611   64511 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0429 20:00:06.734632   64511 out.go:239] * 
	* 
	W0429 20:00:06.737394   64511 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:00:06.739304   64511 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-161370 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-161370 -n embed-certs-161370
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-161370 -n embed-certs-161370: exit status 3 (18.580318939s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 20:00:25.322442   65523 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host
	E0429 20:00:25.322468   65523 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-161370" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.56s)
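Here the stop itself is what fails: the kvm2 driver polls 120 times for the guest to power off, the domain never leaves "Running", and minikube exits 82 with GUEST_STOP_TIMEOUT; the follow-up status check then cannot even reach 192.168.50.184:22. A minimal sketch of the follow-up one could run on the CI host is below. The first two commands only collect what the error box asks for; the virsh steps are not taken from the report and assume the libvirt domain is named after the profile, which is the kvm2 driver's usual convention.

    # Gather the artifacts the error box asks to attach
    out/minikube-linux-amd64 logs --file=logs.txt -p embed-certs-161370
    cp /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log .

    # See what libvirt thinks the guest is doing; a hung ACPI shutdown would explain the 120 x 1s wait expiring
    sudo virsh list --all
    sudo virsh dominfo embed-certs-161370

    # Last resort for a wedged guest (destructive): force power-off so later tests are not blocked
    sudo virsh destroy embed-certs-161370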

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-456788 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-456788 --alsologtostderr -v=3: exit status 82 (2m0.588672629s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-456788"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 19:58:36.253977   64769 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:58:36.254123   64769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:58:36.254135   64769 out.go:304] Setting ErrFile to fd 2...
	I0429 19:58:36.254141   64769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:58:36.254447   64769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:58:36.254766   64769 out.go:298] Setting JSON to false
	I0429 19:58:36.254875   64769 mustload.go:65] Loading cluster: no-preload-456788
	I0429 19:58:36.255358   64769 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:58:36.255450   64769 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/config.json ...
	I0429 19:58:36.255676   64769 mustload.go:65] Loading cluster: no-preload-456788
	I0429 19:58:36.255825   64769 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:58:36.255877   64769 stop.go:39] StopHost: no-preload-456788
	I0429 19:58:36.256498   64769 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 19:58:36.256558   64769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:58:36.271919   64769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0429 19:58:36.272481   64769 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:58:36.273104   64769 main.go:141] libmachine: Using API Version  1
	I0429 19:58:36.273119   64769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:58:36.273526   64769 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:58:36.276183   64769 out.go:177] * Stopping node "no-preload-456788"  ...
	I0429 19:58:36.277704   64769 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 19:58:36.277747   64769 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 19:58:36.277995   64769 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 19:58:36.278020   64769 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 19:58:36.281123   64769 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 19:58:36.281582   64769 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 19:58:36.281611   64769 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 19:58:36.281780   64769 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 19:58:36.281993   64769 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 19:58:36.282185   64769 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 19:58:36.282403   64769 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 19:58:36.401686   64769 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 19:58:36.462563   64769 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 19:58:36.526552   64769 main.go:141] libmachine: Stopping "no-preload-456788"...
	I0429 19:58:36.526584   64769 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 19:58:36.528499   64769 main.go:141] libmachine: (no-preload-456788) Calling .Stop
	I0429 19:58:36.532727   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 0/120
	I0429 19:58:37.534316   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 1/120
	I0429 19:58:38.535904   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 2/120
	I0429 19:58:39.537101   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 3/120
	I0429 19:58:40.538929   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 4/120
	I0429 19:58:41.540462   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 5/120
	I0429 19:58:42.541962   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 6/120
	I0429 19:58:43.543366   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 7/120
	I0429 19:58:44.545589   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 8/120
	I0429 19:58:45.547221   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 9/120
	I0429 19:58:46.549163   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 10/120
	I0429 19:58:47.550731   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 11/120
	I0429 19:58:48.552729   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 12/120
	I0429 19:58:49.554565   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 13/120
	I0429 19:58:50.556612   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 14/120
	I0429 19:58:51.558573   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 15/120
	I0429 19:58:52.560157   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 16/120
	I0429 19:58:53.561676   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 17/120
	I0429 19:58:54.562913   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 18/120
	I0429 19:58:55.564493   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 19/120
	I0429 19:58:56.607020   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 20/120
	I0429 19:58:57.608658   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 21/120
	I0429 19:58:58.610390   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 22/120
	I0429 19:58:59.612048   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 23/120
	I0429 19:59:00.613613   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 24/120
	I0429 19:59:01.615844   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 25/120
	I0429 19:59:02.617308   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 26/120
	I0429 19:59:03.619473   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 27/120
	I0429 19:59:04.621172   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 28/120
	I0429 19:59:05.622460   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 29/120
	I0429 19:59:06.624860   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 30/120
	I0429 19:59:07.627127   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 31/120
	I0429 19:59:08.628547   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 32/120
	I0429 19:59:09.629874   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 33/120
	I0429 19:59:10.631424   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 34/120
	I0429 19:59:11.633482   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 35/120
	I0429 19:59:12.634870   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 36/120
	I0429 19:59:13.636326   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 37/120
	I0429 19:59:14.637683   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 38/120
	I0429 19:59:15.639110   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 39/120
	I0429 19:59:16.640697   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 40/120
	I0429 19:59:17.642143   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 41/120
	I0429 19:59:18.643858   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 42/120
	I0429 19:59:19.645619   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 43/120
	I0429 19:59:20.647045   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 44/120
	I0429 19:59:21.649317   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 45/120
	I0429 19:59:22.651374   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 46/120
	I0429 19:59:23.652806   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 47/120
	I0429 19:59:24.655326   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 48/120
	I0429 19:59:25.657071   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 49/120
	I0429 19:59:26.659525   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 50/120
	I0429 19:59:27.661598   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 51/120
	I0429 19:59:28.663196   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 52/120
	I0429 19:59:29.664600   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 53/120
	I0429 19:59:30.666038   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 54/120
	I0429 19:59:31.667942   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 55/120
	I0429 19:59:32.669949   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 56/120
	I0429 19:59:33.672060   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 57/120
	I0429 19:59:34.673771   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 58/120
	I0429 19:59:35.675924   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 59/120
	I0429 19:59:36.677972   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 60/120
	I0429 19:59:37.679421   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 61/120
	I0429 19:59:38.680887   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 62/120
	I0429 19:59:39.682889   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 63/120
	I0429 19:59:40.684960   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 64/120
	I0429 19:59:41.686823   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 65/120
	I0429 19:59:42.688262   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 66/120
	I0429 19:59:43.689951   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 67/120
	I0429 19:59:44.691488   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 68/120
	I0429 19:59:45.693009   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 69/120
	I0429 19:59:46.695428   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 70/120
	I0429 19:59:47.696859   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 71/120
	I0429 19:59:48.698375   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 72/120
	I0429 19:59:49.700763   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 73/120
	I0429 19:59:50.702153   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 74/120
	I0429 19:59:51.703978   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 75/120
	I0429 19:59:52.705471   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 76/120
	I0429 19:59:53.707467   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 77/120
	I0429 19:59:54.709020   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 78/120
	I0429 19:59:55.710719   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 79/120
	I0429 19:59:56.713015   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 80/120
	I0429 19:59:57.714841   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 81/120
	I0429 19:59:58.716282   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 82/120
	I0429 19:59:59.717993   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 83/120
	I0429 20:00:00.719409   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 84/120
	I0429 20:00:01.721792   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 85/120
	I0429 20:00:02.723515   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 86/120
	I0429 20:00:03.725029   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 87/120
	I0429 20:00:04.727199   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 88/120
	I0429 20:00:05.728810   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 89/120
	I0429 20:00:06.731218   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 90/120
	I0429 20:00:07.732978   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 91/120
	I0429 20:00:08.734325   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 92/120
	I0429 20:00:09.735900   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 93/120
	I0429 20:00:10.737294   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 94/120
	I0429 20:00:11.739634   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 95/120
	I0429 20:00:12.740949   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 96/120
	I0429 20:00:13.742909   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 97/120
	I0429 20:00:14.744680   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 98/120
	I0429 20:00:15.746272   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 99/120
	I0429 20:00:16.748410   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 100/120
	I0429 20:00:17.749737   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 101/120
	I0429 20:00:18.751055   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 102/120
	I0429 20:00:19.752475   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 103/120
	I0429 20:00:20.754566   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 104/120
	I0429 20:00:21.756478   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 105/120
	I0429 20:00:22.758051   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 106/120
	I0429 20:00:23.759377   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 107/120
	I0429 20:00:24.761050   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 108/120
	I0429 20:00:25.762507   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 109/120
	I0429 20:00:26.764959   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 110/120
	I0429 20:00:27.766528   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 111/120
	I0429 20:00:28.767828   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 112/120
	I0429 20:00:29.769298   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 113/120
	I0429 20:00:30.770607   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 114/120
	I0429 20:00:31.772856   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 115/120
	I0429 20:00:32.774208   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 116/120
	I0429 20:00:33.775657   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 117/120
	I0429 20:00:34.777133   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 118/120
	I0429 20:00:35.778318   64769 main.go:141] libmachine: (no-preload-456788) Waiting for machine to stop 119/120
	I0429 20:00:36.779139   64769 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0429 20:00:36.779187   64769 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0429 20:00:36.781229   64769 out.go:177] 
	W0429 20:00:36.782771   64769 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0429 20:00:36.782796   64769 out.go:239] * 
	* 
	W0429 20:00:36.785353   64769 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:00:36.786996   64769 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p no-preload-456788 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456788 -n no-preload-456788
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456788 -n no-preload-456788: exit status 3 (18.485042237s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 20:00:55.274401   65935 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	E0429 20:00:55.274420   65935 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-456788" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.08s)
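Note: the stderr above shows the kvm2 driver polling the guest once per second, "Waiting for machine to stop N/120", and giving up after 120 attempts with GUEST_STOP_TIMEOUT while the domain still reports "Running". A minimal Go sketch of that bounded-poll pattern follows; the vm type and its stop/state methods are hypothetical stand-ins for illustration only, not minikube's or libmachine's API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vm is a toy stand-in for a libvirt domain handle.
type vm struct{ running bool }

func (v *vm) stop() { /* ask the hypervisor to shut down; the request may be ignored */ }

func (v *vm) state() string {
	if v.running {
		return "Running"
	}
	return "Stopped"
}

// waitForStop polls the VM state up to maxAttempts times, one second apart,
// mirroring the "Waiting for machine to stop N/120" lines in the log.
func waitForStop(v *vm, maxAttempts int) error {
	v.stop()
	for i := 0; i < maxAttempts; i++ {
		if v.state() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// 5 attempts here just to keep the sketch quick; the captured run uses 120.
	if err := waitForStop(&vm{running: true}, 5); err != nil {
		fmt.Println("stop err:", err) // the path the failing test hits
	}
}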

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-866143 --alsologtostderr -v=3
E0429 20:00:23.951803   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-866143 --alsologtostderr -v=3: exit status 82 (2m0.549062133s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-866143"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 20:00:14.071539   65639 out.go:291] Setting OutFile to fd 1 ...
	I0429 20:00:14.072019   65639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:00:14.072076   65639 out.go:304] Setting ErrFile to fd 2...
	I0429 20:00:14.072094   65639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:00:14.072641   65639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 20:00:14.073338   65639 out.go:298] Setting JSON to false
	I0429 20:00:14.073440   65639 mustload.go:65] Loading cluster: default-k8s-diff-port-866143
	I0429 20:00:14.073883   65639 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:00:14.073945   65639 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/config.json ...
	I0429 20:00:14.074150   65639 mustload.go:65] Loading cluster: default-k8s-diff-port-866143
	I0429 20:00:14.074255   65639 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:00:14.074279   65639 stop.go:39] StopHost: default-k8s-diff-port-866143
	I0429 20:00:14.074624   65639 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:00:14.074669   65639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:00:14.090873   65639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35631
	I0429 20:00:14.091310   65639 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:00:14.091887   65639 main.go:141] libmachine: Using API Version  1
	I0429 20:00:14.091914   65639 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:00:14.092319   65639 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:00:14.094799   65639 out.go:177] * Stopping node "default-k8s-diff-port-866143"  ...
	I0429 20:00:14.096392   65639 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0429 20:00:14.096434   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:00:14.096711   65639 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0429 20:00:14.096742   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:00:14.099380   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:00:14.099729   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 20:59:13 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:00:14.099751   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:00:14.099943   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:00:14.100128   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:00:14.100304   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:00:14.100415   65639 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:00:14.207950   65639 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0429 20:00:14.278401   65639 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0429 20:00:14.359561   65639 main.go:141] libmachine: Stopping "default-k8s-diff-port-866143"...
	I0429 20:00:14.359606   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:00:14.361380   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Stop
	I0429 20:00:14.364931   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 0/120
	I0429 20:00:15.366731   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 1/120
	I0429 20:00:16.368369   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 2/120
	I0429 20:00:17.369891   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 3/120
	I0429 20:00:18.371386   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 4/120
	I0429 20:00:19.373748   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 5/120
	I0429 20:00:20.375183   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 6/120
	I0429 20:00:21.376480   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 7/120
	I0429 20:00:22.377728   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 8/120
	I0429 20:00:23.379095   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 9/120
	I0429 20:00:24.381405   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 10/120
	I0429 20:00:25.382762   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 11/120
	I0429 20:00:26.384130   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 12/120
	I0429 20:00:27.385559   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 13/120
	I0429 20:00:28.386812   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 14/120
	I0429 20:00:29.388679   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 15/120
	I0429 20:00:30.390052   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 16/120
	I0429 20:00:31.391511   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 17/120
	I0429 20:00:32.392926   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 18/120
	I0429 20:00:33.394394   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 19/120
	I0429 20:00:34.396815   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 20/120
	I0429 20:00:35.398485   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 21/120
	I0429 20:00:36.400270   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 22/120
	I0429 20:00:37.401702   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 23/120
	I0429 20:00:38.403045   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 24/120
	I0429 20:00:39.405015   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 25/120
	I0429 20:00:40.406615   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 26/120
	I0429 20:00:41.408745   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 27/120
	I0429 20:00:42.410273   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 28/120
	I0429 20:00:43.411615   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 29/120
	I0429 20:00:44.413792   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 30/120
	I0429 20:00:45.415165   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 31/120
	I0429 20:00:46.416670   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 32/120
	I0429 20:00:47.417926   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 33/120
	I0429 20:00:48.419345   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 34/120
	I0429 20:00:49.421403   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 35/120
	I0429 20:00:50.422765   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 36/120
	I0429 20:00:51.424872   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 37/120
	I0429 20:00:52.426422   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 38/120
	I0429 20:00:53.428082   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 39/120
	I0429 20:00:54.430624   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 40/120
	I0429 20:00:55.431960   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 41/120
	I0429 20:00:56.433570   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 42/120
	I0429 20:00:57.435154   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 43/120
	I0429 20:00:58.436784   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 44/120
	I0429 20:00:59.438981   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 45/120
	I0429 20:01:00.440327   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 46/120
	I0429 20:01:01.441937   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 47/120
	I0429 20:01:02.443350   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 48/120
	I0429 20:01:03.444736   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 49/120
	I0429 20:01:04.447110   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 50/120
	I0429 20:01:05.449118   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 51/120
	I0429 20:01:06.450408   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 52/120
	I0429 20:01:07.451768   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 53/120
	I0429 20:01:08.453371   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 54/120
	I0429 20:01:09.455680   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 55/120
	I0429 20:01:10.457328   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 56/120
	I0429 20:01:11.458928   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 57/120
	I0429 20:01:12.460627   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 58/120
	I0429 20:01:13.462025   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 59/120
	I0429 20:01:14.463551   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 60/120
	I0429 20:01:15.465045   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 61/120
	I0429 20:01:16.466583   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 62/120
	I0429 20:01:17.468846   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 63/120
	I0429 20:01:18.470234   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 64/120
	I0429 20:01:19.472408   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 65/120
	I0429 20:01:20.473752   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 66/120
	I0429 20:01:21.475119   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 67/120
	I0429 20:01:22.476483   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 68/120
	I0429 20:01:23.477933   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 69/120
	I0429 20:01:24.480327   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 70/120
	I0429 20:01:25.481810   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 71/120
	I0429 20:01:26.483279   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 72/120
	I0429 20:01:27.484758   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 73/120
	I0429 20:01:28.486138   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 74/120
	I0429 20:01:29.488295   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 75/120
	I0429 20:01:30.489766   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 76/120
	I0429 20:01:31.491462   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 77/120
	I0429 20:01:32.492780   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 78/120
	I0429 20:01:33.494429   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 79/120
	I0429 20:01:34.496780   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 80/120
	I0429 20:01:35.498045   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 81/120
	I0429 20:01:36.499461   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 82/120
	I0429 20:01:37.500997   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 83/120
	I0429 20:01:38.502406   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 84/120
	I0429 20:01:39.504934   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 85/120
	I0429 20:01:40.506418   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 86/120
	I0429 20:01:41.507964   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 87/120
	I0429 20:01:42.509370   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 88/120
	I0429 20:01:43.510742   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 89/120
	I0429 20:01:44.512969   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 90/120
	I0429 20:01:45.514391   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 91/120
	I0429 20:01:46.515648   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 92/120
	I0429 20:01:47.516860   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 93/120
	I0429 20:01:48.518154   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 94/120
	I0429 20:01:49.520059   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 95/120
	I0429 20:01:50.521474   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 96/120
	I0429 20:01:51.522784   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 97/120
	I0429 20:01:52.524153   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 98/120
	I0429 20:01:53.525462   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 99/120
	I0429 20:01:54.527837   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 100/120
	I0429 20:01:55.529077   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 101/120
	I0429 20:01:56.530778   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 102/120
	I0429 20:01:57.532082   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 103/120
	I0429 20:01:58.533509   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 104/120
	I0429 20:01:59.535362   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 105/120
	I0429 20:02:00.536837   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 106/120
	I0429 20:02:01.538051   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 107/120
	I0429 20:02:02.539655   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 108/120
	I0429 20:02:03.540983   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 109/120
	I0429 20:02:04.543175   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 110/120
	I0429 20:02:05.544515   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 111/120
	I0429 20:02:06.545906   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 112/120
	I0429 20:02:07.547241   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 113/120
	I0429 20:02:08.548692   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 114/120
	I0429 20:02:09.550977   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 115/120
	I0429 20:02:10.552366   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 116/120
	I0429 20:02:11.553758   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 117/120
	I0429 20:02:12.555106   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 118/120
	I0429 20:02:13.556465   65639 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for machine to stop 119/120
	I0429 20:02:14.557066   65639 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0429 20:02:14.557119   65639 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0429 20:02:14.559238   65639 out.go:177] 
	W0429 20:02:14.560795   65639 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0429 20:02:14.560812   65639 out.go:239] * 
	* 
	W0429 20:02:14.563532   65639 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:02:14.564935   65639 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-866143 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866143 -n default-k8s-diff-port-866143
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866143 -n default-k8s-diff-port-866143: exit status 3 (18.499975156s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 20:02:33.066412   66667 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.106:22: connect: no route to host
	E0429 20:02:33.066431   66667 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.106:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-866143" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.05s)
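Note: before the stop attempt times out, the log above shows a pre-stop backup step: mkdir -p /var/lib/minikube/backup followed by rsync --archive --relative of /etc/cni and /etc/kubernetes. The sketch below replays those commands locally with os/exec purely for illustration; in the captured run they are executed on the guest over SSH, and the helper name is an assumption, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
)

// backupConfig copies the given directories into dest, matching the commands
// visible in the stop log (mkdir -p, then rsync --archive --relative per dir).
func backupConfig(dirs []string, dest string) error {
	if out, err := exec.Command("sudo", "mkdir", "-p", dest).CombinedOutput(); err != nil {
		return fmt.Errorf("mkdir %s: %v: %s", dest, err, out)
	}
	for _, d := range dirs {
		// --relative keeps the full source path under the destination.
		if out, err := exec.Command("sudo", "rsync", "--archive", "--relative", d, dest).CombinedOutput(); err != nil {
			return fmt.Errorf("rsync %s: %v: %s", d, err, out)
		}
	}
	return nil
}

func main() {
	if err := backupConfig([]string{"/etc/cni", "/etc/kubernetes"}, "/var/lib/minikube/backup"); err != nil {
		fmt.Println("backup failed:", err)
	}
}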

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-161370 -n embed-certs-161370
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-161370 -n embed-certs-161370: exit status 3 (3.167775796s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 20:00:28.490433   65689 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host
	E0429 20:00:28.490458   65689 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-161370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-161370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153094166s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-161370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-161370 -n embed-certs-161370
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-161370 -n embed-certs-161370: exit status 3 (3.06265207s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 20:00:37.706489   65771 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host
	E0429 20:00:37.706520   65771 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-161370" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
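Note: both the status probe and the addons-enable call above fail at the same layer, a TCP dial to the node's SSH port returning "no route to host" after the earlier stop timeout left the VM unreachable. A minimal Go sketch of that reachability check follows; the address is copied from the log and the helper name is illustrative, not minikube's API.

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable does a plain TCP dial to the node's SSH port. An error here is
// the same class seen in the log, e.g.
// "dial tcp 192.168.50.184:22: connect: no route to host".
func sshReachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := sshReachable("192.168.50.184:22", 5*time.Second); err != nil {
		fmt.Println("host not reachable, skipping addon enable:", err)
		return
	}
	fmt.Println("host reachable")
}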

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-919612 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-919612 create -f testdata/busybox.yaml: exit status 1 (41.343328ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-919612" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-919612 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612: exit status 6 (228.393227ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 20:00:35.899836   65846 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-919612" does not appear in /home/jenkins/minikube-integration/18774-7754/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-919612" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612: exit status 6 (234.565842ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 20:00:36.135624   65876 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-919612" does not appear in /home/jenkins/minikube-integration/18774-7754/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-919612" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
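Note: every kubectl call in this test fails immediately because the context "old-k8s-version-919612" is missing from the kubeconfig, as the status output above reports. The sketch below shows one way to verify that precondition with client-go's clientcmd loader; the kubeconfig path and context name are taken from the log, and the check itself is an illustration rather than the test's actual code.

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/18774-7754/kubeconfig"
	name := "old-k8s-version-919612"

	// Load the kubeconfig and look the context up by name.
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts[name]; !ok {
		// This is the condition the test trips over: kubectl exits with
		// `error: context "old-k8s-version-919612" does not exist`.
		fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
		os.Exit(1)
	}
	fmt.Println("context present, safe to run kubectl --context", name)
}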

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (88.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-919612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-919612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m27.962682832s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-919612 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-919612 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-919612 describe deploy/metrics-server -n kube-system: exit status 1 (42.445372ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-919612" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-919612 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612: exit status 6 (231.971609ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 20:02:04.372292   66486 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-919612" does not appear in /home/jenkins/minikube-integration/18774-7754/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-919612" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (88.24s)
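Note: the addon enable above fails because the in-guest kubectl apply targets localhost:8443 and the apiserver is refusing connections on that port. A small Go sketch of a bounded wait for that socket follows; the port comes from the log, while the retry budget and function name are illustrative assumptions.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer retries a TCP dial to the apiserver address until it
// accepts a connection or the attempt budget runs out.
func waitForAPIServer(addr string, attempts int, interval time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, interval)
		if err == nil {
			return conn.Close()
		}
		lastErr = err // typically "connect: connection refused" while it is down
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver never came up on %s: %w", addr, lastErr)
}

func main() {
	if err := waitForAPIServer("localhost:8443", 30, time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver reachable, addon manifests can be applied")
}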

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456788 -n no-preload-456788
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456788 -n no-preload-456788: exit status 3 (3.167799957s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 20:00:58.442397   66091 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	E0429 20:00:58.442414   66091 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-456788 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-456788 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153173354s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-456788 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456788 -n no-preload-456788
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456788 -n no-preload-456788: exit status 3 (3.06437963s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 20:01:07.658468   66172 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host
	E0429 20:01:07.658491   66172 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.235:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-456788" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (722.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-919612 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-919612 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m58.630228563s)

                                                
                                                
-- stdout --
	* [old-k8s-version-919612] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-919612" primary control-plane node in "old-k8s-version-919612" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-919612" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 20:02:09.954532   66615 out.go:291] Setting OutFile to fd 1 ...
	I0429 20:02:09.954659   66615 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:09.954677   66615 out.go:304] Setting ErrFile to fd 2...
	I0429 20:02:09.954683   66615 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:09.954884   66615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 20:02:09.955472   66615 out.go:298] Setting JSON to false
	I0429 20:02:09.956525   66615 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6228,"bootTime":1714414702,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 20:02:09.956586   66615 start.go:139] virtualization: kvm guest
	I0429 20:02:09.958701   66615 out.go:177] * [old-k8s-version-919612] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 20:02:09.960023   66615 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:02:09.961177   66615 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:02:09.960054   66615 notify.go:220] Checking for updates...
	I0429 20:02:09.963571   66615 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:02:09.964796   66615 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:02:09.966112   66615 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 20:02:09.967487   66615 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:02:09.969161   66615 config.go:182] Loaded profile config "old-k8s-version-919612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 20:02:09.969526   66615 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:09.969568   66615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:09.985510   66615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I0429 20:02:09.986026   66615 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:09.986591   66615 main.go:141] libmachine: Using API Version  1
	I0429 20:02:09.986613   66615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:09.986986   66615 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:09.987178   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:02:09.989127   66615 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0429 20:02:09.990409   66615 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:02:09.990693   66615 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:09.990726   66615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:10.005530   66615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41469
	I0429 20:02:10.005897   66615 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:10.006343   66615 main.go:141] libmachine: Using API Version  1
	I0429 20:02:10.006370   66615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:10.006686   66615 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:10.006866   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:02:10.041395   66615 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 20:02:10.042879   66615 start.go:297] selected driver: kvm2
	I0429 20:02:10.042891   66615 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:10.043008   66615 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:02:10.043642   66615 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:10.043707   66615 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 20:02:10.058302   66615 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 20:02:10.058632   66615 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:02:10.058693   66615 cni.go:84] Creating CNI manager for ""
	I0429 20:02:10.058706   66615 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:02:10.058748   66615 start.go:340] cluster config:
	{Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:10.058857   66615 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:10.060599   66615 out.go:177] * Starting "old-k8s-version-919612" primary control-plane node in "old-k8s-version-919612" cluster
	I0429 20:02:10.061730   66615 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 20:02:10.061765   66615 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0429 20:02:10.061777   66615 cache.go:56] Caching tarball of preloaded images
	I0429 20:02:10.061869   66615 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 20:02:10.061882   66615 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0429 20:02:10.061963   66615 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/config.json ...
	I0429 20:02:10.062166   66615 start.go:360] acquireMachinesLock for old-k8s-version-919612: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:05:35.155600   66615 start.go:364] duration metric: took 3m25.093405289s to acquireMachinesLock for "old-k8s-version-919612"
	I0429 20:05:35.155655   66615 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:35.155661   66615 fix.go:54] fixHost starting: 
	I0429 20:05:35.155999   66615 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:35.156034   66615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:35.173332   66615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0429 20:05:35.173754   66615 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:35.174261   66615 main.go:141] libmachine: Using API Version  1
	I0429 20:05:35.174294   66615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:35.174602   66615 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:35.174797   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:35.174987   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetState
	I0429 20:05:35.176453   66615 fix.go:112] recreateIfNeeded on old-k8s-version-919612: state=Stopped err=<nil>
	I0429 20:05:35.176478   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	W0429 20:05:35.176647   66615 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:35.178966   66615 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-919612" ...
	I0429 20:05:35.180393   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .Start
	I0429 20:05:35.180576   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring networks are active...
	I0429 20:05:35.181281   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network default is active
	I0429 20:05:35.181678   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network mk-old-k8s-version-919612 is active
	I0429 20:05:35.182102   66615 main.go:141] libmachine: (old-k8s-version-919612) Getting domain xml...
	I0429 20:05:35.182867   66615 main.go:141] libmachine: (old-k8s-version-919612) Creating domain...
	I0429 20:05:36.459478   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting to get IP...
	I0429 20:05:36.460301   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.460751   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.460817   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.460706   67552 retry.go:31] will retry after 280.48781ms: waiting for machine to come up
	I0429 20:05:36.743188   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.743630   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.743658   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.743591   67552 retry.go:31] will retry after 326.238132ms: waiting for machine to come up
	I0429 20:05:37.071146   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.071576   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.071609   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.071527   67552 retry.go:31] will retry after 380.72234ms: waiting for machine to come up
	I0429 20:05:37.453967   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.454435   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.454464   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.454385   67552 retry.go:31] will retry after 593.303053ms: waiting for machine to come up
	I0429 20:05:38.049072   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.049555   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.049587   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.049500   67552 retry.go:31] will retry after 694.752524ms: waiting for machine to come up
	I0429 20:05:38.746542   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.747034   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.747065   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.747002   67552 retry.go:31] will retry after 860.161186ms: waiting for machine to come up
	I0429 20:05:39.609098   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:39.609601   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:39.609634   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:39.609544   67552 retry.go:31] will retry after 726.889681ms: waiting for machine to come up
	I0429 20:05:40.338292   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:40.338823   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:40.338864   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:40.338757   67552 retry.go:31] will retry after 1.310400969s: waiting for machine to come up
	I0429 20:05:41.651107   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:41.651625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:41.651670   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:41.651575   67552 retry.go:31] will retry after 1.769756679s: waiting for machine to come up
	I0429 20:05:43.423326   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:43.423829   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:43.423869   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:43.423790   67552 retry.go:31] will retry after 1.748237944s: waiting for machine to come up
	I0429 20:05:45.173157   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:45.173617   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:45.173642   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:45.173563   67552 retry.go:31] will retry after 2.784243469s: waiting for machine to come up
	I0429 20:05:47.959942   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:47.960473   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:47.960508   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:47.960410   67552 retry.go:31] will retry after 3.046526969s: waiting for machine to come up
	I0429 20:05:51.007941   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:51.008230   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:51.008253   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:51.008213   67552 retry.go:31] will retry after 4.220985004s: waiting for machine to come up
	I0429 20:05:55.230409   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230860   66615 main.go:141] libmachine: (old-k8s-version-919612) Found IP for machine: 192.168.72.240
	I0429 20:05:55.230889   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has current primary IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230898   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserving static IP address...
	I0429 20:05:55.231252   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.231287   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | skip adding static IP to network mk-old-k8s-version-919612 - found existing host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"}
	I0429 20:05:55.231305   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserved static IP address: 192.168.72.240
	I0429 20:05:55.231319   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting for SSH to be available...
	I0429 20:05:55.231335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Getting to WaitForSSH function...
	I0429 20:05:55.233198   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.233500   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH client type: external
	I0429 20:05:55.233671   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa (-rw-------)
	I0429 20:05:55.233706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:05:55.233730   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | About to run SSH command:
	I0429 20:05:55.233747   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | exit 0
	I0429 20:05:55.354242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | SSH cmd err, output: <nil>: 
	I0429 20:05:55.354584   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetConfigRaw
	I0429 20:05:55.355221   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.357791   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.358276   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358564   66615 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/config.json ...
	I0429 20:05:55.358786   66615 machine.go:94] provisionDockerMachine start ...
	I0429 20:05:55.358807   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:55.359037   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.361536   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.361861   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.361885   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.362048   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.362247   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362416   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362568   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.362733   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.362930   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.362943   66615 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:05:55.462364   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:05:55.462388   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462632   66615 buildroot.go:166] provisioning hostname "old-k8s-version-919612"
	I0429 20:05:55.462669   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462852   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.465335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465674   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.465706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465836   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.466034   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466208   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466366   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.466525   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.466729   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.466745   66615 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-919612 && echo "old-k8s-version-919612" | sudo tee /etc/hostname
	I0429 20:05:55.596239   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-919612
	
	I0429 20:05:55.596281   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.599221   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599575   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.599606   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599770   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.599970   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600122   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600316   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.600498   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.600667   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.600690   66615 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-919612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-919612/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-919612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:05:55.716588   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:55.716621   66615 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:05:55.716647   66615 buildroot.go:174] setting up certificates
	I0429 20:05:55.716658   66615 provision.go:84] configureAuth start
	I0429 20:05:55.716671   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.716956   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.719569   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.719919   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.719956   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.720095   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.722484   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.722876   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.722912   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.723036   66615 provision.go:143] copyHostCerts
	I0429 20:05:55.723087   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:05:55.723097   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:05:55.723158   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:05:55.723253   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:05:55.723262   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:05:55.723280   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:05:55.723336   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:05:55.723342   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:05:55.723358   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:05:55.723404   66615 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-919612 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-919612]
	I0429 20:05:55.878639   66615 provision.go:177] copyRemoteCerts
	I0429 20:05:55.878724   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:05:55.878750   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.881746   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882306   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.882358   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882540   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.882743   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.882986   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.883139   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:55.973158   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:05:56.003094   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0429 20:05:56.031670   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:05:56.059049   66615 provision.go:87] duration metric: took 342.376371ms to configureAuth
	I0429 20:05:56.059091   66615 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:05:56.059335   66615 config.go:182] Loaded profile config "old-k8s-version-919612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 20:05:56.059441   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.062416   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.062887   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.062921   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.063082   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.063322   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063521   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063688   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.063901   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.064066   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.064082   66615 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:05:56.342484   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:05:56.342511   66615 machine.go:97] duration metric: took 983.711183ms to provisionDockerMachine
	I0429 20:05:56.342525   66615 start.go:293] postStartSetup for "old-k8s-version-919612" (driver="kvm2")
	I0429 20:05:56.342540   66615 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:05:56.342589   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.342931   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:05:56.342983   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.345399   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345710   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.345731   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345869   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.346047   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.346233   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.346418   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.431189   66615 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:05:56.435878   66615 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:05:56.435903   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:05:56.435983   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:05:56.436086   66615 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:05:56.436170   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:05:56.445841   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:56.472683   66615 start.go:296] duration metric: took 130.146591ms for postStartSetup
	I0429 20:05:56.472715   66615 fix.go:56] duration metric: took 21.31705375s for fixHost
	I0429 20:05:56.472736   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.475127   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.475492   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475624   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.475857   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476055   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476211   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.476378   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.476536   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.476547   66615 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 20:05:56.578999   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421156.548872445
	
	I0429 20:05:56.579028   66615 fix.go:216] guest clock: 1714421156.548872445
	I0429 20:05:56.579040   66615 fix.go:229] Guest: 2024-04-29 20:05:56.548872445 +0000 UTC Remote: 2024-04-29 20:05:56.472718546 +0000 UTC m=+226.572342220 (delta=76.153899ms)
	I0429 20:05:56.579068   66615 fix.go:200] guest clock delta is within tolerance: 76.153899ms
	I0429 20:05:56.579076   66615 start.go:83] releasing machines lock for "old-k8s-version-919612", held for 21.423436193s
	I0429 20:05:56.579111   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.579407   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:56.582338   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582673   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.582711   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582856   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583365   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583543   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583625   66615 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:05:56.583667   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.583765   66615 ssh_runner.go:195] Run: cat /version.json
	I0429 20:05:56.583805   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.586263   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586552   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586618   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586656   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586891   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.586953   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586989   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.587060   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587170   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.587240   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587310   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587458   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587462   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.587600   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.672678   66615 ssh_runner.go:195] Run: systemctl --version
	I0429 20:05:56.694175   66615 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:05:56.859009   66615 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:05:56.865723   66615 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:05:56.865798   66615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:05:56.885686   66615 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:05:56.885714   66615 start.go:494] detecting cgroup driver to use...
	I0429 20:05:56.885805   66615 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:05:56.909082   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:05:56.931583   66615 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:05:56.931646   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:05:56.953524   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:05:56.976170   66615 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:05:57.122813   66615 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:05:57.315725   66615 docker.go:233] disabling docker service ...
	I0429 20:05:57.315786   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:05:57.333927   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:05:57.350022   66615 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:05:57.525787   66615 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:05:57.685802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:05:57.703246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:05:57.730558   66615 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0429 20:05:57.730618   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.747081   66615 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:05:57.747133   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.760168   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.773553   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.787609   66615 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:05:57.800532   66615 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:05:57.813582   66615 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:05:57.813669   66615 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:05:57.832224   66615 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:05:57.844783   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:57.991666   66615 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:05:58.183635   66615 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:05:58.183718   66615 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:05:58.189441   66615 start.go:562] Will wait 60s for crictl version
	I0429 20:05:58.189509   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:05:58.194049   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:05:58.250751   66615 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:05:58.250839   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.292368   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.336121   66615 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0429 20:05:58.337389   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:58.340707   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341125   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:58.341153   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341387   66615 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0429 20:05:58.346434   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:58.361081   66615 kubeadm.go:877] updating cluster {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:05:58.361242   66615 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 20:05:58.361307   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:05:58.414304   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:05:58.414366   66615 ssh_runner.go:195] Run: which lz4
	I0429 20:05:58.420584   66615 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 20:05:58.425682   66615 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:05:58.425712   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0429 20:06:00.520217   66615 crio.go:462] duration metric: took 2.099664395s to copy over tarball
	I0429 20:06:00.520314   66615 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:04.082476   66615 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562128598s)
	I0429 20:06:04.082527   66615 crio.go:469] duration metric: took 3.562271241s to extract the tarball
	I0429 20:06:04.082538   66615 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:04.129338   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:04.177683   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:06:04.177709   66615 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 20:06:04.177762   66615 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.177798   66615 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.177817   66615 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.177834   66615 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.177835   66615 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.177783   66615 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.177897   66615 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0429 20:06:04.177972   66615 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179282   66615 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.179360   66615 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.179361   66615 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.179320   66615 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.179331   66615 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.179299   66615 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0429 20:06:04.323997   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376145   66615 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0429 20:06:04.376210   66615 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376261   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.381592   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.420565   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0429 20:06:04.440670   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0429 20:06:04.461763   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.499283   66615 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0429 20:06:04.499347   66615 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0429 20:06:04.499404   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513860   66615 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0429 20:06:04.513900   66615 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.513946   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513988   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0429 20:06:04.548990   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.556713   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.556942   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.556965   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0429 20:06:04.566227   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.598982   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.656930   66615 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0429 20:06:04.656980   66615 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.657038   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.724922   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0429 20:06:04.725179   66615 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0429 20:06:04.725218   66615 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.725262   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732375   66615 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0429 20:06:04.732429   66615 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.732482   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732492   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.732483   66615 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0429 20:06:04.732669   66615 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.732726   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.735419   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.739785   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.742496   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.834684   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0429 20:06:04.834754   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0429 20:06:04.834811   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0429 20:06:04.847076   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0429 20:06:05.091766   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:05.269730   66615 cache_images.go:92] duration metric: took 1.092006107s to LoadCachedImages
	W0429 20:06:05.269839   66615 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0429 20:06:05.269857   66615 kubeadm.go:928] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I0429 20:06:05.269988   66615 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-919612 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:05.270088   66615 ssh_runner.go:195] Run: crio config
	I0429 20:06:05.322439   66615 cni.go:84] Creating CNI manager for ""
	I0429 20:06:05.322471   66615 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:05.322486   66615 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:05.322522   66615 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-919612 NodeName:old-k8s-version-919612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0429 20:06:05.322746   66615 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-919612"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:05.322810   66615 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0429 20:06:05.340981   66615 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:05.341058   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:05.357048   66615 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0429 20:06:05.384352   66615 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:05.407887   66615 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0429 20:06:05.431531   66615 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:05.437567   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:05.457652   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:05.610358   66615 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:05.641538   66615 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612 for IP: 192.168.72.240
	I0429 20:06:05.641568   66615 certs.go:194] generating shared ca certs ...
	I0429 20:06:05.641583   66615 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:05.641758   66615 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:05.641831   66615 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:05.641843   66615 certs.go:256] generating profile certs ...
	I0429 20:06:05.641948   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.key
	I0429 20:06:05.642020   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key.5df5e618
	I0429 20:06:05.642083   66615 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key
	I0429 20:06:05.642256   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:05.642304   66615 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:05.642325   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:05.642364   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:05.642401   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:05.642435   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:05.642489   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:05.643156   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:05.691350   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:05.734434   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:05.773056   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:05.819778   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 20:06:05.868256   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:05.911589   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:05.957714   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 20:06:06.002120   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:06.039736   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:06.079636   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:06.118317   66615 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:06.145932   66615 ssh_runner.go:195] Run: openssl version
	I0429 20:06:06.152970   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:06.166609   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.171939   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.172033   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.179153   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:06.193491   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:06.207800   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214803   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214876   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.222154   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:06.236908   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:06.254197   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260797   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260863   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.267635   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:06.282727   66615 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:06.289580   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:06.301014   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:06.310503   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:06.318708   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:06.325718   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:06.332690   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 20:06:06.339914   66615 kubeadm.go:391] StartCluster: {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:06.340012   66615 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:06.340069   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.391511   66615 cri.go:89] found id: ""
	I0429 20:06:06.391618   66615 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:06.408955   66615 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:06.408985   66615 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:06.408991   66615 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:06.409060   66615 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:06.425276   66615 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:06.426397   66615 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-919612" does not appear in /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:06:06.427298   66615 kubeconfig.go:62] /home/jenkins/minikube-integration/18774-7754/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-919612" cluster setting kubeconfig missing "old-k8s-version-919612" context setting]
	I0429 20:06:06.428287   66615 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:06.429908   66615 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:06.443630   66615 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.240
	I0429 20:06:06.443674   66615 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:06.443686   66615 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:06.443753   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.486251   66615 cri.go:89] found id: ""
	I0429 20:06:06.486339   66615 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:06.507136   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:06.523798   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:06.523828   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:06.523887   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:06:06.536668   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:06.536735   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:06.547800   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:06:06.560435   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:06.560517   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:06.572227   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.582772   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:06.582825   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.594168   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:06:06.605940   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:06.606013   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:06.621829   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:06.637520   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:06.779910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:07.921143   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.141191032s)
	I0429 20:06:07.921178   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.172381   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.276243   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.398312   66615 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:08.398424   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:08.899388   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.399344   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.898731   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:10.399055   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:10.898742   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.399250   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.898511   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.399301   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.899399   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.399242   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.899417   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.398526   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.898976   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:15.399474   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:15.899352   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.899106   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.899205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.399351   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.899319   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.399303   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.898824   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.399233   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.898571   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.398855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.898885   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.399328   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.398965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.899248   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.398833   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.899039   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:25.398515   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:25.898944   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.399360   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.899294   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.399520   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.899434   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.398734   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.898479   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.399413   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.899236   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:30.398730   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:30.898542   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.399309   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.898751   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.399374   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.899262   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.398723   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.899281   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.399356   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.899305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:35.399419   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:35.899244   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.398934   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.898847   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.399273   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.899102   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.398748   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.399524   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.898813   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:40.399024   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:40.899056   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.399275   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.899285   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.399200   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.899243   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.398590   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.899346   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:45.398908   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:45.898619   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.398795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.899058   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.399257   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.899269   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.398874   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.898653   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.399305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.898855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.398577   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.899284   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.399361   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.899134   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.399211   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.898733   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.399280   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.898915   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.399264   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.898840   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:55.398622   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:55.898563   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.399306   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.898473   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.899278   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.399121   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.899291   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.399197   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.898901   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.398537   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.899359   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.399125   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.899428   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.399457   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.899355   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.399421   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.899376   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.399331   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.899263   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:05.398458   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:05.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.399205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.399308   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.898749   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:08.399182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:08.399271   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:08.448015   66615 cri.go:89] found id: ""
	I0429 20:07:08.448041   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.448049   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:08.448055   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:08.448103   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:08.491239   66615 cri.go:89] found id: ""
	I0429 20:07:08.491265   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.491274   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:08.491280   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:08.491330   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:08.541203   66615 cri.go:89] found id: ""
	I0429 20:07:08.541226   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.541234   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:08.541239   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:08.541300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:08.584370   66615 cri.go:89] found id: ""
	I0429 20:07:08.584393   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.584401   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:08.584407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:08.584469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:08.625126   66615 cri.go:89] found id: ""
	I0429 20:07:08.625158   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.625169   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:08.625182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:08.625246   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:08.666987   66615 cri.go:89] found id: ""
	I0429 20:07:08.667018   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.667032   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:08.667039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:08.667105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:08.712363   66615 cri.go:89] found id: ""
	I0429 20:07:08.712394   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.712405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:08.712413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:08.712471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:08.762122   66615 cri.go:89] found id: ""
	I0429 20:07:08.762151   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.762170   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:08.762180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:08.762196   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:08.808218   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:08.808246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:08.867278   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:08.867317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:08.884230   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:08.884266   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:09.018183   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:09.018208   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:09.018224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:11.587112   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:11.603711   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:11.603781   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:11.651087   66615 cri.go:89] found id: ""
	I0429 20:07:11.651115   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.651123   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:11.651128   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:11.651192   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:11.691888   66615 cri.go:89] found id: ""
	I0429 20:07:11.691914   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.691921   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:11.691928   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:11.691976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:11.733411   66615 cri.go:89] found id: ""
	I0429 20:07:11.733441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.733452   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:11.733460   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:11.733517   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:11.774620   66615 cri.go:89] found id: ""
	I0429 20:07:11.774648   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.774659   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:11.774666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:11.774729   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:11.821410   66615 cri.go:89] found id: ""
	I0429 20:07:11.821441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.821449   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:11.821455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:11.821502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:11.864699   66615 cri.go:89] found id: ""
	I0429 20:07:11.864730   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.864741   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:11.864749   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:11.864809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:11.904637   66615 cri.go:89] found id: ""
	I0429 20:07:11.904678   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.904687   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:11.904693   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:11.904742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:11.970914   66615 cri.go:89] found id: ""
	I0429 20:07:11.970945   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.970957   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:11.970968   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:11.970984   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:12.024185   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:12.024226   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:12.040319   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:12.040349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:12.137888   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:12.137915   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:12.137941   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:12.210256   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:12.210290   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:14.758756   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:14.775321   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:14.775386   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:14.812637   66615 cri.go:89] found id: ""
	I0429 20:07:14.812662   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.812672   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:14.812679   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:14.812735   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:14.851503   66615 cri.go:89] found id: ""
	I0429 20:07:14.851536   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.851547   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:14.851554   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:14.851613   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:14.885708   66615 cri.go:89] found id: ""
	I0429 20:07:14.885739   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.885749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:14.885756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:14.885817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:14.926133   66615 cri.go:89] found id: ""
	I0429 20:07:14.926162   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.926173   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:14.926181   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:14.926240   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:14.967553   66615 cri.go:89] found id: ""
	I0429 20:07:14.967582   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.967593   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:14.967601   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:14.967659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:15.006174   66615 cri.go:89] found id: ""
	I0429 20:07:15.006199   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.006207   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:15.006218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:15.006293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:15.046916   66615 cri.go:89] found id: ""
	I0429 20:07:15.046940   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.046947   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:15.046953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:15.047009   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:15.089229   66615 cri.go:89] found id: ""
	I0429 20:07:15.089256   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.089266   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:15.089278   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:15.089298   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:15.143518   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:15.143561   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:15.162742   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:15.162769   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:15.242850   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:15.242872   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:15.242884   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:15.315783   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:15.315825   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
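Note: the "found id" / "0 containers" lines above come from the container probe minikube repeats while waiting for the control plane: it runs crictl over SSH and treats empty output as "no container found". A minimal local sketch of the same check, assuming crictl is installed and runnable via sudo; probeContainer is an illustrative name and os/exec stands in for minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probeContainer mirrors the query in the log above:
//   sudo crictl ps -a --quiet --name=<name>
// Empty output is what produces the `found id: ""` / `0 containers` lines.
func probeContainer(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := probeContainer(name)
		if err != nil {
			fmt.Printf("%s: probe failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}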
	I0429 20:07:17.863336   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:17.877802   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:17.877869   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:17.935714   66615 cri.go:89] found id: ""
	I0429 20:07:17.935738   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.935746   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:17.935754   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:17.935810   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:17.988496   66615 cri.go:89] found id: ""
	I0429 20:07:17.988529   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.988540   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:17.988547   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:17.988610   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:18.030695   66615 cri.go:89] found id: ""
	I0429 20:07:18.030726   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.030737   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:18.030745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:18.030822   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:18.077452   66615 cri.go:89] found id: ""
	I0429 20:07:18.077481   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.077491   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:18.077498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:18.077561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:18.120102   66615 cri.go:89] found id: ""
	I0429 20:07:18.120127   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.120136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:18.120141   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:18.120200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:18.163440   66615 cri.go:89] found id: ""
	I0429 20:07:18.163469   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.163480   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:18.163487   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:18.163549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:18.202650   66615 cri.go:89] found id: ""
	I0429 20:07:18.202680   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.202693   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:18.202699   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:18.202760   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:18.244378   66615 cri.go:89] found id: ""
	I0429 20:07:18.244408   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.244418   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:18.244429   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:18.244446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:18.289246   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:18.289279   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:18.343382   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:18.343425   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:18.359070   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:18.359103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:18.440316   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:18.440337   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:18.440351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:21.019552   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:21.036407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:21.036523   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:21.083148   66615 cri.go:89] found id: ""
	I0429 20:07:21.083171   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.083179   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:21.083184   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:21.083231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:21.129382   66615 cri.go:89] found id: ""
	I0429 20:07:21.129415   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.129426   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:21.129434   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:21.129502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:21.172978   66615 cri.go:89] found id: ""
	I0429 20:07:21.173007   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.173015   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:21.173020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:21.173068   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:21.218124   66615 cri.go:89] found id: ""
	I0429 20:07:21.218159   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.218171   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:21.218178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:21.218243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:21.260603   66615 cri.go:89] found id: ""
	I0429 20:07:21.260640   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.260651   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:21.260658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:21.260723   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:21.302351   66615 cri.go:89] found id: ""
	I0429 20:07:21.302386   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.302398   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:21.302407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:21.302498   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:21.347003   66615 cri.go:89] found id: ""
	I0429 20:07:21.347028   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.347037   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:21.347043   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:21.347098   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:21.388202   66615 cri.go:89] found id: ""
	I0429 20:07:21.388236   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.388245   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:21.388257   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:21.388272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:21.442706   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:21.442744   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:21.457453   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:21.457489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:21.539669   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:21.539695   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:21.539707   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:21.625210   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:21.625247   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.173256   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:24.189920   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:24.189990   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:24.236730   66615 cri.go:89] found id: ""
	I0429 20:07:24.236761   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.236772   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:24.236779   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:24.236843   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:24.279031   66615 cri.go:89] found id: ""
	I0429 20:07:24.279055   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.279062   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:24.279067   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:24.279112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:24.321622   66615 cri.go:89] found id: ""
	I0429 20:07:24.321647   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.321657   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:24.321665   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:24.321726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:24.360884   66615 cri.go:89] found id: ""
	I0429 20:07:24.360911   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.360919   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:24.360924   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:24.360983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:24.414439   66615 cri.go:89] found id: ""
	I0429 20:07:24.414463   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.414472   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:24.414477   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:24.414559   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:24.456994   66615 cri.go:89] found id: ""
	I0429 20:07:24.457023   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.457033   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:24.457041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:24.457107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:24.497991   66615 cri.go:89] found id: ""
	I0429 20:07:24.498026   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.498036   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:24.498044   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:24.498137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:24.539375   66615 cri.go:89] found id: ""
	I0429 20:07:24.539415   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.539426   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:24.539438   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:24.539453   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:24.661778   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:24.661804   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:24.661820   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:24.748180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:24.748215   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.795963   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:24.795999   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:24.851485   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:24.851524   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:27.367869   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:27.385633   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:27.385716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:27.423181   66615 cri.go:89] found id: ""
	I0429 20:07:27.423210   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.423222   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:27.423233   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:27.423293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:27.467385   66615 cri.go:89] found id: ""
	I0429 20:07:27.467419   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.467432   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:27.467439   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:27.467503   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:27.506171   66615 cri.go:89] found id: ""
	I0429 20:07:27.506204   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.506216   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:27.506223   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:27.506272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:27.545043   66615 cri.go:89] found id: ""
	I0429 20:07:27.545066   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.545074   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:27.545080   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:27.545136   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:27.592279   66615 cri.go:89] found id: ""
	I0429 20:07:27.592306   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.592314   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:27.592320   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:27.592379   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:27.628569   66615 cri.go:89] found id: ""
	I0429 20:07:27.628595   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.628604   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:27.628612   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:27.628659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:27.667937   66615 cri.go:89] found id: ""
	I0429 20:07:27.667967   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.667978   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:27.667985   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:27.668047   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:27.708813   66615 cri.go:89] found id: ""
	I0429 20:07:27.708844   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.708853   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:27.708861   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:27.708876   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:27.789589   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:27.789625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:27.837147   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:27.837180   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:27.891928   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:27.891956   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:27.906162   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:27.906188   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:27.983738   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
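The stderr block above is the failure every cycle hits: "connection to the server localhost:8443 was refused" means nothing is listening on the apiserver port yet, so kubectl cannot describe the nodes. A tiny check, assuming the standard port 8443 from the error text and an arbitrary 2-second timeout, that distinguishes "refused" (no listener) from a slow or hung server:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A plain TCP dial to the apiserver port: "connection refused" here
	// matches the kubectl error above and means no listener is up yet.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}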
	I0429 20:07:30.484404   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:30.503968   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:30.504041   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:30.553070   66615 cri.go:89] found id: ""
	I0429 20:07:30.553099   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.553111   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:30.553118   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:30.553180   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:30.609226   66615 cri.go:89] found id: ""
	I0429 20:07:30.609253   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.609262   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:30.609267   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:30.609324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:30.658359   66615 cri.go:89] found id: ""
	I0429 20:07:30.658384   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.658395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:30.658401   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:30.658459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:30.710024   66615 cri.go:89] found id: ""
	I0429 20:07:30.710048   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.710058   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:30.710114   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:30.710173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:30.752361   66615 cri.go:89] found id: ""
	I0429 20:07:30.752388   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.752398   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:30.752405   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:30.752469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:30.793311   66615 cri.go:89] found id: ""
	I0429 20:07:30.793333   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.793341   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:30.793347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:30.793394   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:30.832371   66615 cri.go:89] found id: ""
	I0429 20:07:30.832400   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.832411   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:30.832417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:30.832469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:30.871183   66615 cri.go:89] found id: ""
	I0429 20:07:30.871215   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.871226   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:30.871237   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:30.871253   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:30.929909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:30.929947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:30.944454   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:30.944482   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:31.022060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:31.022100   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:31.022116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:31.104142   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:31.104185   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:33.651167   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:33.667888   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:33.667948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:33.708455   66615 cri.go:89] found id: ""
	I0429 20:07:33.708484   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.708495   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:33.708502   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:33.708561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:33.747578   66615 cri.go:89] found id: ""
	I0429 20:07:33.747602   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.747611   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:33.747616   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:33.747661   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:33.796005   66615 cri.go:89] found id: ""
	I0429 20:07:33.796036   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.796056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:33.796064   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:33.796128   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:33.836238   66615 cri.go:89] found id: ""
	I0429 20:07:33.836263   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.836271   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:33.836276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:33.836324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:33.877010   66615 cri.go:89] found id: ""
	I0429 20:07:33.877043   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.877056   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:33.877065   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:33.877137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:33.919690   66615 cri.go:89] found id: ""
	I0429 20:07:33.919714   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.919722   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:33.919727   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:33.919797   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:33.959857   66615 cri.go:89] found id: ""
	I0429 20:07:33.959889   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.959900   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:33.959907   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:33.959989   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:33.996349   66615 cri.go:89] found id: ""
	I0429 20:07:33.996376   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.996386   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:33.996396   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:33.996433   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:34.010773   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:34.010808   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:34.091581   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:34.091599   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:34.091611   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:34.173266   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:34.173299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:34.221447   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:34.221479   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:36.776486   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:36.791630   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:36.791764   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:36.837475   66615 cri.go:89] found id: ""
	I0429 20:07:36.837503   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.837513   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:36.837521   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:36.837607   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:36.879902   66615 cri.go:89] found id: ""
	I0429 20:07:36.879936   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.879947   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:36.879954   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:36.880021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:36.918566   66615 cri.go:89] found id: ""
	I0429 20:07:36.918594   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.918608   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:36.918613   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:36.918659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:36.958876   66615 cri.go:89] found id: ""
	I0429 20:07:36.958937   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.958948   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:36.958959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:36.959008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:36.998790   66615 cri.go:89] found id: ""
	I0429 20:07:36.998820   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.998845   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:36.998864   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:36.998932   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:37.036933   66615 cri.go:89] found id: ""
	I0429 20:07:37.036962   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.036972   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:37.036979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:37.037024   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:37.076560   66615 cri.go:89] found id: ""
	I0429 20:07:37.076597   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.076609   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:37.076616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:37.076688   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:37.118324   66615 cri.go:89] found id: ""
	I0429 20:07:37.118351   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.118360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:37.118368   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:37.118380   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:37.194671   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:37.194714   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:37.236269   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:37.236300   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:37.297006   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:37.297061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:37.312696   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:37.312723   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:37.387132   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
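Each retry below starts with the same pgrep probe for the apiserver process. A minimal sketch of that wait loop, with an assumed interval and timeout (the real values live in minikube's Go code and are not shown in this log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the repeated probe in the log:
//   sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits non-zero when no process matches the pattern.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumed timeout, not taken from the minikube source
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // roughly the spacing between the pgrep lines in this log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}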
	I0429 20:07:39.888111   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:39.903157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:39.903236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:39.945913   66615 cri.go:89] found id: ""
	I0429 20:07:39.945945   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.945956   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:39.945980   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:39.946076   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:39.986494   66615 cri.go:89] found id: ""
	I0429 20:07:39.986521   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.986530   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:39.986538   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:39.986598   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:40.031481   66615 cri.go:89] found id: ""
	I0429 20:07:40.031520   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.031531   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:40.031539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:40.031604   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:40.076792   66615 cri.go:89] found id: ""
	I0429 20:07:40.076816   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.076824   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:40.076830   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:40.076877   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:40.121020   66615 cri.go:89] found id: ""
	I0429 20:07:40.121050   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.121061   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:40.121068   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:40.121134   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:40.173189   66615 cri.go:89] found id: ""
	I0429 20:07:40.173221   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.173233   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:40.173241   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:40.173303   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:40.220190   66615 cri.go:89] found id: ""
	I0429 20:07:40.220212   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.220223   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:40.220229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:40.220293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:40.262552   66615 cri.go:89] found id: ""
	I0429 20:07:40.262579   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.262588   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:40.262600   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:40.262616   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:40.322249   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:40.322289   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:40.338703   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:40.338734   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:40.431311   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:40.431333   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:40.431345   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:40.518410   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:40.518446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:43.062556   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:43.077757   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:43.077844   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:43.129247   66615 cri.go:89] found id: ""
	I0429 20:07:43.129277   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.129289   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:43.129296   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:43.129364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:43.173474   66615 cri.go:89] found id: ""
	I0429 20:07:43.173501   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.173509   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:43.173514   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:43.173566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:43.218788   66615 cri.go:89] found id: ""
	I0429 20:07:43.218812   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.218820   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:43.218825   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:43.218873   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:43.259269   66615 cri.go:89] found id: ""
	I0429 20:07:43.259289   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.259297   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:43.259302   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:43.259362   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:43.301152   66615 cri.go:89] found id: ""
	I0429 20:07:43.301180   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.301189   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:43.301195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:43.301244   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:43.338183   66615 cri.go:89] found id: ""
	I0429 20:07:43.338211   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.338222   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:43.338229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:43.338276   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:43.376919   66615 cri.go:89] found id: ""
	I0429 20:07:43.376946   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.376958   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:43.376966   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:43.377032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:43.417421   66615 cri.go:89] found id: ""
	I0429 20:07:43.417450   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.417457   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:43.417465   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:43.417478   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:43.470009   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:43.470040   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:43.486059   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:43.486109   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:43.561688   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:43.561709   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:43.561725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:43.649713   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:43.649750   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
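For reference, the "Gathering logs for ..." steps in each cycle map one-to-one to shell commands, copied verbatim in the map below. A small sketch that runs those commands locally, assuming a Linux host with bash, journalctl, and crictl or docker available, rather than going through minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

// Command strings are taken from the log lines above; running them locally
// is a simplification for illustration only.
var logSources = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"CRI-O":            "sudo journalctl -u crio -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range logSources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("gathered %d bytes for %s\n", len(out), name)
	}
}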
	I0429 20:07:46.194996   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:46.210261   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:46.210342   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:46.249208   66615 cri.go:89] found id: ""
	I0429 20:07:46.249240   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.249253   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:46.249260   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:46.249336   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:46.287285   66615 cri.go:89] found id: ""
	I0429 20:07:46.287315   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.287328   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:46.287335   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:46.287397   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:46.327944   66615 cri.go:89] found id: ""
	I0429 20:07:46.327976   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.327988   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:46.327996   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:46.328061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:46.373875   66615 cri.go:89] found id: ""
	I0429 20:07:46.373899   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.373908   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:46.373914   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:46.373967   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:46.413748   66615 cri.go:89] found id: ""
	I0429 20:07:46.413774   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.413783   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:46.413789   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:46.413853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:46.459380   66615 cri.go:89] found id: ""
	I0429 20:07:46.459412   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.459424   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:46.459432   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:46.459496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:46.499833   66615 cri.go:89] found id: ""
	I0429 20:07:46.499861   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.499870   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:46.499876   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:46.499939   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:46.541025   66615 cri.go:89] found id: ""
	I0429 20:07:46.541055   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.541068   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:46.541080   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:46.541096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:46.601187   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:46.601224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:46.617399   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:46.617426   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:46.697076   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:46.697113   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:46.697129   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:46.783265   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:46.783303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.335795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:49.350030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:49.350116   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:49.390278   66615 cri.go:89] found id: ""
	I0429 20:07:49.390315   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.390326   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:49.390333   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:49.390388   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:49.431145   66615 cri.go:89] found id: ""
	I0429 20:07:49.431175   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.431186   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:49.431193   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:49.431252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:49.473965   66615 cri.go:89] found id: ""
	I0429 20:07:49.473997   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.474014   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:49.474022   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:49.474105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:49.515372   66615 cri.go:89] found id: ""
	I0429 20:07:49.515407   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.515419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:49.515427   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:49.515487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:49.552541   66615 cri.go:89] found id: ""
	I0429 20:07:49.552567   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.552576   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:49.552582   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:49.552650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:49.599628   66615 cri.go:89] found id: ""
	I0429 20:07:49.599660   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.599672   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:49.599680   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:49.599745   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:49.642705   66615 cri.go:89] found id: ""
	I0429 20:07:49.642741   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.642752   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:49.642759   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:49.642827   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:49.679864   66615 cri.go:89] found id: ""
	I0429 20:07:49.679888   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.679896   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:49.679905   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:49.679919   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:49.765967   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:49.765986   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:49.766010   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:49.852739   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:49.852779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.905586   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:49.905613   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:49.959443   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:49.959474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
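	(The block above is one full pass of what appears to be minikube waiting for the control plane to come up: it probes for a kube-apiserver process, lists CRI containers for each control-plane component, finds none, and gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying roughly every three seconds. A minimal sketch of the same checks run by hand, assuming SSH access to the minikube VM; the commands are the ones shown in the log lines above:
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # is an apiserver process running at all?
	    sudo crictl ps -a --quiet --name=kube-apiserver   # any apiserver container, in any state?
	    sudo journalctl -u kubelet -n 400                 # kubelet logs: why static pods are not starting
	If all three come back empty, the retry loop below will keep repeating with the same output.)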
	I0429 20:07:52.476677   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:52.491378   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:52.491458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:52.535801   66615 cri.go:89] found id: ""
	I0429 20:07:52.535827   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.535835   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:52.535841   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:52.535901   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:52.582895   66615 cri.go:89] found id: ""
	I0429 20:07:52.582932   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.582944   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:52.582952   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:52.583022   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:52.627070   66615 cri.go:89] found id: ""
	I0429 20:07:52.627096   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.627113   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:52.627120   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:52.627181   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:52.673312   66615 cri.go:89] found id: ""
	I0429 20:07:52.673339   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.673348   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:52.673353   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:52.673399   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:52.713099   66615 cri.go:89] found id: ""
	I0429 20:07:52.713124   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.713131   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:52.713139   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:52.713205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:52.761982   66615 cri.go:89] found id: ""
	I0429 20:07:52.762007   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.762017   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:52.762024   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:52.762108   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:52.801019   66615 cri.go:89] found id: ""
	I0429 20:07:52.801048   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.801059   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:52.801067   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:52.801141   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:52.842544   66615 cri.go:89] found id: ""
	I0429 20:07:52.842578   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.842602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:52.842613   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:52.842630   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:52.896409   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:52.896442   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:52.912625   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:52.912650   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:52.992231   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:52.992260   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:52.992276   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:53.077473   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:53.077507   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:55.625557   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:55.640211   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:55.640284   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:55.683215   66615 cri.go:89] found id: ""
	I0429 20:07:55.683250   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.683259   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:55.683275   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:55.683341   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:55.730820   66615 cri.go:89] found id: ""
	I0429 20:07:55.730851   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.730862   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:55.730869   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:55.730928   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:55.771784   66615 cri.go:89] found id: ""
	I0429 20:07:55.771808   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.771816   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:55.771821   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:55.771866   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:55.814988   66615 cri.go:89] found id: ""
	I0429 20:07:55.815021   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.815034   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:55.815042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:55.815114   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:55.859293   66615 cri.go:89] found id: ""
	I0429 20:07:55.859327   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.859340   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:55.859349   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:55.859416   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:55.901802   66615 cri.go:89] found id: ""
	I0429 20:07:55.901833   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.901844   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:55.901852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:55.901921   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:55.943863   66615 cri.go:89] found id: ""
	I0429 20:07:55.943895   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.943905   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:55.943913   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:55.943977   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:55.986256   66615 cri.go:89] found id: ""
	I0429 20:07:55.986284   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.986296   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:55.986314   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:55.986332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:56.036710   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:56.036742   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:56.099909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:56.099945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:56.117630   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:56.117660   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:56.197396   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:56.197421   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:56.197436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:58.779065   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:58.794086   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:58.794168   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:58.844035   66615 cri.go:89] found id: ""
	I0429 20:07:58.844062   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.844070   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:58.844076   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:58.844133   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:58.887859   66615 cri.go:89] found id: ""
	I0429 20:07:58.887889   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.887900   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:58.887906   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:58.887991   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:58.929039   66615 cri.go:89] found id: ""
	I0429 20:07:58.929072   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.929083   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:58.929092   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:58.929152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:58.965930   66615 cri.go:89] found id: ""
	I0429 20:07:58.965975   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.965983   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:58.965989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:58.966061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:59.005583   66615 cri.go:89] found id: ""
	I0429 20:07:59.005616   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.005628   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:59.005638   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:59.005697   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:59.047964   66615 cri.go:89] found id: ""
	I0429 20:07:59.047994   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.048007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:59.048014   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:59.048077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:59.091851   66615 cri.go:89] found id: ""
	I0429 20:07:59.091891   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.091904   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:59.091909   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:59.091978   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:59.134843   66615 cri.go:89] found id: ""
	I0429 20:07:59.134874   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.134881   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:59.134890   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:59.134907   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:59.219048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:59.219084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:59.267404   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:59.267436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:59.322264   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:59.322303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:59.339196   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:59.339235   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:59.441904   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
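	(Each pass also runs kubectl describe nodes against the on-node kubeconfig; it fails with "connection refused" because nothing is answering on localhost:8443, the port the kubeconfig points at, which is consistent with the empty crictl listings above. A hedged sketch of a manual probe, assuming SSH access to the node and that ss is available in the guest image; the kubectl path and kubeconfig are the ones shown in the log:
	    sudo ss -ltn | grep 8443 || echo "nothing listening on 8443"
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	The second command simply repeats the call the log shows failing, so it should succeed only once an apiserver container is up.)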
	I0429 20:08:01.942998   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:01.957442   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:01.957502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:02.002240   66615 cri.go:89] found id: ""
	I0429 20:08:02.002271   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.002283   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:02.002291   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:02.002353   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:02.051506   66615 cri.go:89] found id: ""
	I0429 20:08:02.051535   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.051546   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:02.051552   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:02.051611   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:02.093194   66615 cri.go:89] found id: ""
	I0429 20:08:02.093234   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.093247   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:02.093254   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:02.093317   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:02.134988   66615 cri.go:89] found id: ""
	I0429 20:08:02.135016   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.135027   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:02.135034   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:02.135099   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:02.182954   66615 cri.go:89] found id: ""
	I0429 20:08:02.182982   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.182993   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:02.183000   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:02.183063   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:02.227778   66615 cri.go:89] found id: ""
	I0429 20:08:02.227807   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.227817   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:02.227826   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:02.227888   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:02.265593   66615 cri.go:89] found id: ""
	I0429 20:08:02.265624   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.265634   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:02.265641   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:02.265701   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:02.306520   66615 cri.go:89] found id: ""
	I0429 20:08:02.306550   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.306558   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:02.306566   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:02.306578   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:02.323806   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:02.323844   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:02.407110   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:02.407140   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:02.407153   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:02.493755   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:02.493791   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:02.538610   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:02.538640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:05.096630   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:05.111112   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:05.111173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:05.151237   66615 cri.go:89] found id: ""
	I0429 20:08:05.151268   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.151279   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:05.151286   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:05.151370   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:05.205344   66615 cri.go:89] found id: ""
	I0429 20:08:05.205379   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.205389   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:05.205396   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:05.205478   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:05.244394   66615 cri.go:89] found id: ""
	I0429 20:08:05.244426   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.244438   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:05.244445   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:05.244504   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:05.285320   66615 cri.go:89] found id: ""
	I0429 20:08:05.285343   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.285350   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:05.285356   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:05.285404   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:05.327618   66615 cri.go:89] found id: ""
	I0429 20:08:05.327645   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.327657   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:05.327664   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:05.327742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:05.369152   66615 cri.go:89] found id: ""
	I0429 20:08:05.369178   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.369194   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:05.369208   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:05.369277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:05.407206   66615 cri.go:89] found id: ""
	I0429 20:08:05.407234   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.407243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:05.407248   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:05.407299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:05.447404   66615 cri.go:89] found id: ""
	I0429 20:08:05.447438   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.447449   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:05.447459   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:05.447475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:05.529660   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:05.529700   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:05.582510   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:05.582565   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:05.639300   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:05.639351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:05.656825   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:05.656860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:05.730863   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.231635   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:08.247722   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:08.247811   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:08.298354   66615 cri.go:89] found id: ""
	I0429 20:08:08.298382   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.298395   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:08.298401   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:08.298459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:08.339497   66615 cri.go:89] found id: ""
	I0429 20:08:08.339536   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.339549   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:08.339556   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:08.339609   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:08.379665   66615 cri.go:89] found id: ""
	I0429 20:08:08.379695   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.379705   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:08.379712   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:08.379786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:08.419698   66615 cri.go:89] found id: ""
	I0429 20:08:08.419722   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.419732   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:08.419739   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:08.419798   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:08.463901   66615 cri.go:89] found id: ""
	I0429 20:08:08.463935   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.463946   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:08.463953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:08.464028   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:08.504568   66615 cri.go:89] found id: ""
	I0429 20:08:08.504603   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.504617   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:08.504626   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:08.504695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:08.545634   66615 cri.go:89] found id: ""
	I0429 20:08:08.545661   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.545671   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:08.545678   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:08.545741   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:08.586936   66615 cri.go:89] found id: ""
	I0429 20:08:08.586965   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.586976   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:08.586987   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:08.587003   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:08.641755   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:08.641794   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:08.659798   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:08.659845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:08.744265   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.744288   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:08.744303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:08.823813   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:08.823860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:11.375600   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:11.396286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:11.396351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:11.442737   66615 cri.go:89] found id: ""
	I0429 20:08:11.442781   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.442789   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:11.442797   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:11.442865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:11.484131   66615 cri.go:89] found id: ""
	I0429 20:08:11.484158   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.484167   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:11.484172   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:11.484231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:11.526647   66615 cri.go:89] found id: ""
	I0429 20:08:11.526684   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.526695   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:11.526705   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:11.526777   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:11.572001   66615 cri.go:89] found id: ""
	I0429 20:08:11.572028   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.572036   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:11.572042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:11.572100   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:11.618980   66615 cri.go:89] found id: ""
	I0429 20:08:11.619003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.619011   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:11.619016   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:11.619077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:11.667079   66615 cri.go:89] found id: ""
	I0429 20:08:11.667107   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.667115   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:11.667123   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:11.667198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:11.707967   66615 cri.go:89] found id: ""
	I0429 20:08:11.708003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.708013   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:11.708020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:11.708073   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:11.753024   66615 cri.go:89] found id: ""
	I0429 20:08:11.753053   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.753062   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:11.753070   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:11.753081   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:11.820171   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:11.820210   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:11.852234   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:11.852263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:11.971060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:11.971085   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:11.971097   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:12.049797   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:12.049845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:14.601181   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:14.621413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:14.621496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:14.677453   66615 cri.go:89] found id: ""
	I0429 20:08:14.677486   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.677498   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:14.677504   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:14.677562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:14.720517   66615 cri.go:89] found id: ""
	I0429 20:08:14.720548   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.720560   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:14.720571   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:14.720636   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:14.770186   66615 cri.go:89] found id: ""
	I0429 20:08:14.770211   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.770219   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:14.770225   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:14.770301   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:14.815286   66615 cri.go:89] found id: ""
	I0429 20:08:14.815310   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.815320   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:14.815327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:14.815389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:14.862625   66615 cri.go:89] found id: ""
	I0429 20:08:14.862651   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.862662   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:14.862669   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:14.862726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:14.910517   66615 cri.go:89] found id: ""
	I0429 20:08:14.910554   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.910565   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:14.910572   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:14.910634   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:14.951085   66615 cri.go:89] found id: ""
	I0429 20:08:14.951110   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.951119   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:14.951124   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:14.951173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:14.991414   66615 cri.go:89] found id: ""
	I0429 20:08:14.991443   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.991455   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:14.991464   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:14.991476   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:15.047551   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:15.047583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:15.063667   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:15.063692   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:15.141744   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:15.141820   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:15.141841   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:15.225676   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:15.225722   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:17.774459   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:17.793137   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:17.793210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:17.856725   66615 cri.go:89] found id: ""
	I0429 20:08:17.856756   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.856767   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:17.856774   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:17.856835   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:17.916510   66615 cri.go:89] found id: ""
	I0429 20:08:17.916542   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.916554   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:17.916561   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:17.916646   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:17.970835   66615 cri.go:89] found id: ""
	I0429 20:08:17.970867   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.970877   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:17.970884   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:17.970948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:18.013324   66615 cri.go:89] found id: ""
	I0429 20:08:18.013353   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.013366   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:18.013384   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:18.013458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:18.062930   66615 cri.go:89] found id: ""
	I0429 20:08:18.062957   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.062968   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:18.062974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:18.063040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:18.111792   66615 cri.go:89] found id: ""
	I0429 20:08:18.111820   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.111829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:18.111834   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:18.111911   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:18.160096   66615 cri.go:89] found id: ""
	I0429 20:08:18.160121   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.160129   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:18.160135   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:18.160198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:18.204012   66615 cri.go:89] found id: ""
	I0429 20:08:18.204044   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.204052   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:18.204062   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:18.204074   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:18.284288   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:18.284337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:18.340746   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:18.340779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:18.397612   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:18.397652   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:18.413425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:18.413455   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:18.493598   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:20.994339   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:21.010199   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:21.010289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:21.052190   66615 cri.go:89] found id: ""
	I0429 20:08:21.052219   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.052230   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:21.052237   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:21.052300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:21.090838   66615 cri.go:89] found id: ""
	I0429 20:08:21.090870   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.090882   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:21.090889   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:21.090953   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:21.137997   66615 cri.go:89] found id: ""
	I0429 20:08:21.138044   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.138056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:21.138082   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:21.138171   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:21.176278   66615 cri.go:89] found id: ""
	I0429 20:08:21.176311   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.176323   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:21.176331   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:21.176390   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:21.213925   66615 cri.go:89] found id: ""
	I0429 20:08:21.213955   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.213966   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:21.213973   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:21.214039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:21.253815   66615 cri.go:89] found id: ""
	I0429 20:08:21.253842   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.253850   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:21.253857   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:21.253905   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:21.296521   66615 cri.go:89] found id: ""
	I0429 20:08:21.296553   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.296565   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:21.296573   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:21.296633   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:21.337114   66615 cri.go:89] found id: ""
	I0429 20:08:21.337143   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.337150   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:21.337158   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:21.337177   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:21.384860   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:21.384901   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:21.443837   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:21.443899   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:21.460084   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:21.460116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:21.541230   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:21.541262   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:21.541278   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.132057   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:24.148381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:24.148458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:24.192469   66615 cri.go:89] found id: ""
	I0429 20:08:24.192499   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.192510   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:24.192516   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:24.192568   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:24.232150   66615 cri.go:89] found id: ""
	I0429 20:08:24.232177   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.232188   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:24.232195   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:24.232260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:24.272679   66615 cri.go:89] found id: ""
	I0429 20:08:24.272705   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.272714   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:24.272719   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:24.272772   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:24.317114   66615 cri.go:89] found id: ""
	I0429 20:08:24.317137   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.317145   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:24.317151   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:24.317200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:24.362251   66615 cri.go:89] found id: ""
	I0429 20:08:24.362279   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.362287   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:24.362294   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:24.362346   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:24.405696   66615 cri.go:89] found id: ""
	I0429 20:08:24.405721   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.405729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:24.405734   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:24.405828   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:24.446837   66615 cri.go:89] found id: ""
	I0429 20:08:24.446864   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.446871   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:24.446878   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:24.446929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:24.493416   66615 cri.go:89] found id: ""
	I0429 20:08:24.493445   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.493454   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:24.493462   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:24.493475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:24.555657   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:24.555693   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:24.572297   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:24.572328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:24.658463   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:24.658487   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:24.658499   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.752064   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:24.752103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:27.303812   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:27.319304   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:27.319373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:27.360473   66615 cri.go:89] found id: ""
	I0429 20:08:27.360509   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.360521   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:27.360529   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:27.360595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:27.404619   66615 cri.go:89] found id: ""
	I0429 20:08:27.404651   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.404668   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:27.404675   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:27.404742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:27.447464   66615 cri.go:89] found id: ""
	I0429 20:08:27.447490   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.447498   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:27.447503   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:27.447556   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:27.489197   66615 cri.go:89] found id: ""
	I0429 20:08:27.489235   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.489246   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:27.489253   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:27.489323   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:27.534354   66615 cri.go:89] found id: ""
	I0429 20:08:27.534387   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.534397   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:27.534404   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:27.534470   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:27.580721   66615 cri.go:89] found id: ""
	I0429 20:08:27.580751   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.580762   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:27.580769   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:27.580841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:27.620000   66615 cri.go:89] found id: ""
	I0429 20:08:27.620033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.620041   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:27.620046   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:27.620096   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:27.659000   66615 cri.go:89] found id: ""
	I0429 20:08:27.659033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.659041   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:27.659050   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:27.659062   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:27.739202   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:27.739241   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:27.784761   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:27.784807   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:27.842707   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:27.842748   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:27.859471   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:27.859498   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:27.942686   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
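	(The cycle above repeats roughly every three seconds: minikube polls for a kube-apiserver process, lists CRI containers for each expected control-plane component, and, finding none, gathers kubelet, dmesg, CRI-O, and container-status logs while "kubectl describe nodes" keeps failing because nothing is listening on localhost:8443. As a rough guide only, the same checks can be reproduced by hand on the node; the commands below are copied from the log itself, and the kubectl path assumes the bundled v1.20.0 binary used in this run.)
	
	    # poll for a running apiserver process (same check minikube performs)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	
	    # list CRI containers for one expected component, e.g. kube-apiserver
	    sudo crictl ps -a --quiet --name=kube-apiserver
	
	    # gather the same diagnostics minikube collects when the poll fails
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	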
	I0429 20:08:30.443410   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:30.460332   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:30.460417   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:30.497715   66615 cri.go:89] found id: ""
	I0429 20:08:30.497752   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.497764   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:30.497772   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:30.497841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:30.539376   66615 cri.go:89] found id: ""
	I0429 20:08:30.539409   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.539419   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:30.539426   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:30.539492   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:30.587567   66615 cri.go:89] found id: ""
	I0429 20:08:30.587596   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.587606   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:30.587616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:30.587679   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:30.626198   66615 cri.go:89] found id: ""
	I0429 20:08:30.626228   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.626238   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:30.626246   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:30.626313   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:30.665798   66615 cri.go:89] found id: ""
	I0429 20:08:30.665829   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.665837   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:30.665843   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:30.665909   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:30.708627   66615 cri.go:89] found id: ""
	I0429 20:08:30.708659   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.708671   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:30.708679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:30.708762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:30.754190   66615 cri.go:89] found id: ""
	I0429 20:08:30.754220   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.754230   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:30.754236   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:30.754295   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:30.797383   66615 cri.go:89] found id: ""
	I0429 20:08:30.797410   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.797421   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:30.797432   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:30.797447   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:30.843485   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:30.843512   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:30.900081   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:30.900118   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:30.916095   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:30.916125   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:30.995509   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:30.995529   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:30.995541   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:33.584596   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:33.600969   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:33.601058   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:33.643935   66615 cri.go:89] found id: ""
	I0429 20:08:33.643967   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.643979   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:33.643986   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:33.644049   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:33.681047   66615 cri.go:89] found id: ""
	I0429 20:08:33.681077   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.681085   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:33.681091   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:33.681160   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:33.726450   66615 cri.go:89] found id: ""
	I0429 20:08:33.726479   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.726490   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:33.726501   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:33.726561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:33.765237   66615 cri.go:89] found id: ""
	I0429 20:08:33.765264   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.765275   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:33.765281   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:33.765339   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:33.808333   66615 cri.go:89] found id: ""
	I0429 20:08:33.808366   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.808376   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:33.808383   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:33.808446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:33.854991   66615 cri.go:89] found id: ""
	I0429 20:08:33.855023   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.855034   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:33.855041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:33.855126   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:33.895405   66615 cri.go:89] found id: ""
	I0429 20:08:33.895434   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.895446   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:33.895455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:33.895521   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:33.937265   66615 cri.go:89] found id: ""
	I0429 20:08:33.937289   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.937297   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:33.937306   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:33.937324   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:33.991565   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:33.991594   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:34.006316   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:34.006343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:34.088734   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:34.088762   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:34.088776   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:34.180451   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:34.180489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:36.727080   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:36.743038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:36.743124   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:36.785441   66615 cri.go:89] found id: ""
	I0429 20:08:36.785465   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.785475   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:36.785482   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:36.785542   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:36.828787   66615 cri.go:89] found id: ""
	I0429 20:08:36.828819   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.828829   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:36.828836   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:36.828896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:36.867712   66615 cri.go:89] found id: ""
	I0429 20:08:36.867738   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.867749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:36.867756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:36.867825   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:36.911435   66615 cri.go:89] found id: ""
	I0429 20:08:36.911462   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.911472   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:36.911478   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:36.911560   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:36.953803   66615 cri.go:89] found id: ""
	I0429 20:08:36.953828   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.953836   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:36.953842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:36.953903   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:36.990305   66615 cri.go:89] found id: ""
	I0429 20:08:36.990329   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.990339   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:36.990347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:36.990434   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:37.029177   66615 cri.go:89] found id: ""
	I0429 20:08:37.029206   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.029225   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:37.029232   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:37.029294   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:37.067583   66615 cri.go:89] found id: ""
	I0429 20:08:37.067605   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.067612   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:37.067619   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:37.067631   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:37.144739   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:37.144776   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:37.144788   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:37.227724   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:37.227762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:37.270383   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:37.270417   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:37.326858   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:37.326890   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:39.843323   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:39.859899   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:39.859961   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:39.903125   66615 cri.go:89] found id: ""
	I0429 20:08:39.903155   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.903164   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:39.903169   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:39.903243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:39.944271   66615 cri.go:89] found id: ""
	I0429 20:08:39.944300   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.944309   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:39.944314   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:39.944363   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:39.989934   66615 cri.go:89] found id: ""
	I0429 20:08:39.989964   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.989972   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:39.989978   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:39.990032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:40.025936   66615 cri.go:89] found id: ""
	I0429 20:08:40.025965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.025976   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:40.025983   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:40.026044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:40.065943   66615 cri.go:89] found id: ""
	I0429 20:08:40.065965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.065976   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:40.065984   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:40.066038   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:40.109986   66615 cri.go:89] found id: ""
	I0429 20:08:40.110018   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.110030   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:40.110038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:40.110115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:40.155610   66615 cri.go:89] found id: ""
	I0429 20:08:40.155716   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.155734   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:40.155745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:40.155803   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:40.196213   66615 cri.go:89] found id: ""
	I0429 20:08:40.196239   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.196246   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:40.196256   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:40.196272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:40.280330   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:40.280372   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:40.326774   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:40.326810   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:40.379438   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:40.379475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:40.395332   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:40.395362   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:40.504413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:43.005046   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:43.020464   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:43.020544   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:43.066403   66615 cri.go:89] found id: ""
	I0429 20:08:43.066432   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.066444   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:43.066452   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:43.066548   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:43.109732   66615 cri.go:89] found id: ""
	I0429 20:08:43.109760   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.109771   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:43.109778   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:43.109850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:43.158457   66615 cri.go:89] found id: ""
	I0429 20:08:43.158483   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.158492   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:43.158498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:43.158561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:43.207170   66615 cri.go:89] found id: ""
	I0429 20:08:43.207201   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.207213   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:43.207221   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:43.207281   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:43.246746   66615 cri.go:89] found id: ""
	I0429 20:08:43.246783   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.246804   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:43.246811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:43.246875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:43.292786   66615 cri.go:89] found id: ""
	I0429 20:08:43.292813   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.292824   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:43.292831   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:43.292896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:43.337509   66615 cri.go:89] found id: ""
	I0429 20:08:43.337537   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.337546   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:43.337551   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:43.337601   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:43.378446   66615 cri.go:89] found id: ""
	I0429 20:08:43.378473   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.378481   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:43.378490   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:43.378502   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:43.460438   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:43.460474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:43.503908   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:43.503945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:43.561661   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:43.561699   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:43.577924   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:43.577954   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:43.667006   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:46.168175   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:46.212494   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:46.212579   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:46.251567   66615 cri.go:89] found id: ""
	I0429 20:08:46.251593   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.251603   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:46.251610   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:46.251673   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:46.291913   66615 cri.go:89] found id: ""
	I0429 20:08:46.291943   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.291955   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:46.291962   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:46.292023   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:46.331801   66615 cri.go:89] found id: ""
	I0429 20:08:46.331827   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.331836   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:46.331842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:46.331899   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:46.375956   66615 cri.go:89] found id: ""
	I0429 20:08:46.375989   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.376001   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:46.376008   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:46.376090   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:46.425572   66615 cri.go:89] found id: ""
	I0429 20:08:46.425599   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.425609   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:46.425618   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:46.425681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:46.468161   66615 cri.go:89] found id: ""
	I0429 20:08:46.468226   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.468249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:46.468263   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:46.468433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:46.512163   66615 cri.go:89] found id: ""
	I0429 20:08:46.512193   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.512205   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:46.512212   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:46.512277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:46.556047   66615 cri.go:89] found id: ""
	I0429 20:08:46.556078   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.556088   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:46.556099   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:46.556111   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:46.609886   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:46.609921   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:46.625848   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:46.625878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:46.699005   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:46.699037   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:46.699053   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:46.783886   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:46.783923   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.331288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:49.344805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:49.344864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:49.381576   66615 cri.go:89] found id: ""
	I0429 20:08:49.381598   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.381605   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:49.381619   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:49.381667   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:49.418276   66615 cri.go:89] found id: ""
	I0429 20:08:49.418316   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.418329   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:49.418336   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:49.418389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:49.460147   66615 cri.go:89] found id: ""
	I0429 20:08:49.460177   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.460188   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:49.460195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:49.460253   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:49.500534   66615 cri.go:89] found id: ""
	I0429 20:08:49.500562   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.500569   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:49.500575   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:49.500632   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:49.538481   66615 cri.go:89] found id: ""
	I0429 20:08:49.538521   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.538534   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:49.538541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:49.538603   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:49.580192   66615 cri.go:89] found id: ""
	I0429 20:08:49.580218   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.580228   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:49.580234   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:49.580299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:49.616400   66615 cri.go:89] found id: ""
	I0429 20:08:49.616427   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.616437   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:49.616444   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:49.616551   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:49.652871   66615 cri.go:89] found id: ""
	I0429 20:08:49.652900   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.652918   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:49.652931   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:49.652947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:49.728173   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:49.728200   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:49.728212   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:49.813701   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:49.813749   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.855685   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:49.855712   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:49.906480   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:49.906514   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:52.422430   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:52.437412   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:52.437488   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:52.476896   66615 cri.go:89] found id: ""
	I0429 20:08:52.476919   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.476927   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:52.476932   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:52.476976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:52.517266   66615 cri.go:89] found id: ""
	I0429 20:08:52.517298   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.517310   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:52.517318   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:52.517381   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:52.560886   66615 cri.go:89] found id: ""
	I0429 20:08:52.560909   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.560917   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:52.560922   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:52.560969   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:52.601362   66615 cri.go:89] found id: ""
	I0429 20:08:52.601398   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.601419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:52.601429   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:52.601506   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:52.639544   66615 cri.go:89] found id: ""
	I0429 20:08:52.639580   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.639591   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:52.639599   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:52.639652   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:52.681088   66615 cri.go:89] found id: ""
	I0429 20:08:52.681120   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.681130   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:52.681138   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:52.681204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:52.721777   66615 cri.go:89] found id: ""
	I0429 20:08:52.721802   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.721820   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:52.721828   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:52.721900   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:52.762823   66615 cri.go:89] found id: ""
	I0429 20:08:52.762845   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.762856   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:52.762863   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:52.762875   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:52.819291   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:52.819326   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:52.847120   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:52.847165   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:52.956274   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:52.956301   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:52.956317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:53.041636   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:53.041676   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:55.592636   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:55.607372   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:55.607449   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:55.643959   66615 cri.go:89] found id: ""
	I0429 20:08:55.643991   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.644000   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:55.644005   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:55.644061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:55.682272   66615 cri.go:89] found id: ""
	I0429 20:08:55.682304   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.682315   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:55.682323   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:55.682384   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:55.720157   66615 cri.go:89] found id: ""
	I0429 20:08:55.720189   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.720200   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:55.720207   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:55.720272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:55.761748   66615 cri.go:89] found id: ""
	I0429 20:08:55.761773   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.761781   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:55.761786   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:55.761842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:55.802377   66615 cri.go:89] found id: ""
	I0429 20:08:55.802405   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.802416   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:55.802423   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:55.802494   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:55.838986   66615 cri.go:89] found id: ""
	I0429 20:08:55.839016   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.839024   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:55.839030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:55.839077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:55.874991   66615 cri.go:89] found id: ""
	I0429 20:08:55.875022   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.875032   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:55.875039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:55.875106   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:55.913561   66615 cri.go:89] found id: ""
	I0429 20:08:55.913595   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.913607   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:55.913618   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:55.913633   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:55.965355   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:55.965391   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:55.981222   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:55.981259   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:56.056656   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:56.056685   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:56.056701   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:56.135276   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:56.135309   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:58.682855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:58.701679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:58.701769   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:58.760807   66615 cri.go:89] found id: ""
	I0429 20:08:58.760828   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.760841   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:58.760858   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:58.760910   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:58.835167   66615 cri.go:89] found id: ""
	I0429 20:08:58.835204   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.835216   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:58.835223   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:58.835289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:58.877367   66615 cri.go:89] found id: ""
	I0429 20:08:58.877398   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.877409   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:58.877417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:58.877483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:58.923726   66615 cri.go:89] found id: ""
	I0429 20:08:58.923751   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.923760   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:58.923766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:58.923817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:58.967780   66615 cri.go:89] found id: ""
	I0429 20:08:58.967804   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.967811   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:58.967816   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:58.967865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:59.010646   66615 cri.go:89] found id: ""
	I0429 20:08:59.010682   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.010690   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:59.010697   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:59.010759   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:59.057380   66615 cri.go:89] found id: ""
	I0429 20:08:59.057408   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.057418   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:59.057426   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:59.057483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:59.099669   66615 cri.go:89] found id: ""
	I0429 20:08:59.099698   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.099706   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:59.099715   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:59.099731   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:59.146831   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:59.146861   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:59.204232   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:59.204274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:59.219799   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:59.219824   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:59.305438   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:59.305465   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:59.305481   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:01.885861   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:01.900746   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:01.900808   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:01.942174   66615 cri.go:89] found id: ""
	I0429 20:09:01.942210   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.942218   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:01.942224   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:01.942285   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:01.986463   66615 cri.go:89] found id: ""
	I0429 20:09:01.986491   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.986502   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:01.986509   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:01.986570   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:02.026290   66615 cri.go:89] found id: ""
	I0429 20:09:02.026314   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.026321   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:02.026327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:02.026375   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:02.064239   66615 cri.go:89] found id: ""
	I0429 20:09:02.064259   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.064266   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:02.064271   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:02.064321   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:02.105807   66615 cri.go:89] found id: ""
	I0429 20:09:02.105838   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.105857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:02.105866   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:02.105926   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:02.144939   66615 cri.go:89] found id: ""
	I0429 20:09:02.144962   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.144970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:02.144975   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:02.145037   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:02.192866   66615 cri.go:89] found id: ""
	I0429 20:09:02.192891   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.192899   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:02.192905   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:02.192955   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:02.232485   66615 cri.go:89] found id: ""
	I0429 20:09:02.232515   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.232524   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:02.232533   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:02.232550   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:02.287374   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:02.287402   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:02.302979   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:02.303009   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:02.380693   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:02.380713   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:02.380725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:02.467048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:02.467084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:05.018176   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:05.033178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:05.033238   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:05.079008   66615 cri.go:89] found id: ""
	I0429 20:09:05.079034   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.079043   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:05.079050   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:05.079113   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:05.118620   66615 cri.go:89] found id: ""
	I0429 20:09:05.118642   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.118650   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:05.118655   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:05.118714   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:05.159603   66615 cri.go:89] found id: ""
	I0429 20:09:05.159646   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.159660   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:05.159666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:05.159733   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:05.200224   66615 cri.go:89] found id: ""
	I0429 20:09:05.200252   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.200262   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:05.200270   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:05.200344   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:05.246341   66615 cri.go:89] found id: ""
	I0429 20:09:05.246384   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.246396   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:05.246403   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:05.246471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:05.286126   66615 cri.go:89] found id: ""
	I0429 20:09:05.286153   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.286163   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:05.286171   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:05.286235   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:05.326911   66615 cri.go:89] found id: ""
	I0429 20:09:05.326941   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.326952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:05.326958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:05.327019   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:05.365564   66615 cri.go:89] found id: ""
	I0429 20:09:05.365592   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.365602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:05.365621   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:05.365637   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:05.445857   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:05.445877   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:05.445889   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:05.530129   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:05.530164   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:05.573936   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:05.573971   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:05.631263   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:05.631299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:08.147288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:08.162949   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:08.163021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:08.203009   66615 cri.go:89] found id: ""
	I0429 20:09:08.203033   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.203041   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:08.203047   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:08.203112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:08.241708   66615 cri.go:89] found id: ""
	I0429 20:09:08.241735   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.241744   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:08.241750   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:08.241801   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:08.283976   66615 cri.go:89] found id: ""
	I0429 20:09:08.284005   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.284017   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:08.284023   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:08.284091   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:08.323909   66615 cri.go:89] found id: ""
	I0429 20:09:08.323939   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.323951   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:08.323962   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:08.324031   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:08.363236   66615 cri.go:89] found id: ""
	I0429 20:09:08.363263   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.363271   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:08.363276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:08.363328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:08.401767   66615 cri.go:89] found id: ""
	I0429 20:09:08.401790   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.401798   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:08.401803   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:08.401851   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:08.443678   66615 cri.go:89] found id: ""
	I0429 20:09:08.443709   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.443726   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:08.443731   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:08.443791   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:08.489025   66615 cri.go:89] found id: ""
	I0429 20:09:08.489069   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.489103   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:08.489129   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:08.489163   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:08.543421   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:08.543462   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:08.560425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:08.560459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:08.642819   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:08.642840   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:08.642855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:08.726644   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:08.726682   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:11.277817   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:11.292340   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:11.292420   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:11.330721   66615 cri.go:89] found id: ""
	I0429 20:09:11.330756   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.330768   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:11.330776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:11.330850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:11.372057   66615 cri.go:89] found id: ""
	I0429 20:09:11.372089   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.372098   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:11.372103   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:11.372155   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:11.414786   66615 cri.go:89] found id: ""
	I0429 20:09:11.414814   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.414825   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:11.414832   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:11.414898   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:11.454934   66615 cri.go:89] found id: ""
	I0429 20:09:11.454961   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.454969   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:11.454974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:11.455039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:11.494169   66615 cri.go:89] found id: ""
	I0429 20:09:11.494200   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.494211   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:11.494217   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:11.494277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:11.541646   66615 cri.go:89] found id: ""
	I0429 20:09:11.541684   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.541694   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:11.541701   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:11.541766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:11.584025   66615 cri.go:89] found id: ""
	I0429 20:09:11.584055   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.584067   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:11.584075   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:11.584138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:11.622425   66615 cri.go:89] found id: ""
	I0429 20:09:11.622459   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.622471   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:11.622481   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:11.622493   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:11.676416   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:11.676450   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:11.693793   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:11.693822   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:11.771410   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:11.771437   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:11.771454   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:11.854969   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:11.855047   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:14.398871   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:14.415894   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:14.415983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:14.454718   66615 cri.go:89] found id: ""
	I0429 20:09:14.454752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.454763   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:14.454773   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:14.454836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:14.498562   66615 cri.go:89] found id: ""
	I0429 20:09:14.498591   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.498602   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:14.498609   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:14.498669   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:14.536357   66615 cri.go:89] found id: ""
	I0429 20:09:14.536384   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.536395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:14.536402   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:14.536460   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:14.577240   66615 cri.go:89] found id: ""
	I0429 20:09:14.577274   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.577284   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:14.577291   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:14.577372   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:14.617231   66615 cri.go:89] found id: ""
	I0429 20:09:14.617266   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.617279   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:14.617287   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:14.617355   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:14.659053   66615 cri.go:89] found id: ""
	I0429 20:09:14.659081   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.659090   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:14.659096   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:14.659145   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:14.708723   66615 cri.go:89] found id: ""
	I0429 20:09:14.708752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.708760   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:14.708766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:14.708814   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:14.753732   66615 cri.go:89] found id: ""
	I0429 20:09:14.753762   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.753773   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:14.753783   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:14.753798   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:14.771952   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:14.771985   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:14.842649   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:14.842680   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:14.842696   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:14.925565   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:14.925603   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:14.975731   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:14.975765   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.528872   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:17.544373   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:17.544455   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:17.582977   66615 cri.go:89] found id: ""
	I0429 20:09:17.583001   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.583009   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:17.583014   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:17.583079   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:17.620322   66615 cri.go:89] found id: ""
	I0429 20:09:17.620352   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.620368   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:17.620373   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:17.620421   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:17.664339   66615 cri.go:89] found id: ""
	I0429 20:09:17.664367   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.664375   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:17.664381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:17.664433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:17.705150   66615 cri.go:89] found id: ""
	I0429 20:09:17.705175   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.705184   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:17.705189   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:17.705239   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:17.749713   66615 cri.go:89] found id: ""
	I0429 20:09:17.749738   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.749747   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:17.749752   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:17.749850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:17.791528   66615 cri.go:89] found id: ""
	I0429 20:09:17.791552   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.791560   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:17.791566   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:17.791615   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:17.834994   66615 cri.go:89] found id: ""
	I0429 20:09:17.835024   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.835035   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:17.835050   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:17.835107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:17.872194   66615 cri.go:89] found id: ""
	I0429 20:09:17.872226   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.872236   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:17.872248   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:17.872263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.926899   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:17.926936   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:17.944184   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:17.944218   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:18.029224   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:18.029246   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:18.029258   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:18.111112   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:18.111147   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:20.655965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:20.671420   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:20.671487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:20.710100   66615 cri.go:89] found id: ""
	I0429 20:09:20.710132   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.710144   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:20.710151   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:20.710221   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:20.748849   66615 cri.go:89] found id: ""
	I0429 20:09:20.748877   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.748888   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:20.748894   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:20.748956   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:20.788113   66615 cri.go:89] found id: ""
	I0429 20:09:20.788140   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.788151   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:20.788157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:20.788217   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:20.831432   66615 cri.go:89] found id: ""
	I0429 20:09:20.831455   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.831462   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:20.831470   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:20.831518   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:20.878156   66615 cri.go:89] found id: ""
	I0429 20:09:20.878183   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.878191   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:20.878197   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:20.878262   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:20.920691   66615 cri.go:89] found id: ""
	I0429 20:09:20.920718   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.920729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:20.920735   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:20.920795   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:20.960674   66615 cri.go:89] found id: ""
	I0429 20:09:20.960709   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.960719   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:20.960726   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:20.960786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:21.006462   66615 cri.go:89] found id: ""
	I0429 20:09:21.006486   66615 logs.go:276] 0 containers: []
	W0429 20:09:21.006495   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:21.006503   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:21.006518   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:21.060040   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:21.060076   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:21.077141   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:21.077171   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:21.157058   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:21.157083   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:21.157096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:21.265626   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:21.265662   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:23.813718   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:23.828338   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:23.828400   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:23.868730   66615 cri.go:89] found id: ""
	I0429 20:09:23.868760   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.868771   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:23.868776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:23.868842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:23.907919   66615 cri.go:89] found id: ""
	I0429 20:09:23.907941   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.907949   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:23.907956   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:23.908011   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:23.956769   66615 cri.go:89] found id: ""
	I0429 20:09:23.956794   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.956805   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:23.956811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:23.956875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:23.998578   66615 cri.go:89] found id: ""
	I0429 20:09:23.998612   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.998621   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:23.998628   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:23.998681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:24.037458   66615 cri.go:89] found id: ""
	I0429 20:09:24.037485   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.037492   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:24.037499   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:24.037562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:24.078305   66615 cri.go:89] found id: ""
	I0429 20:09:24.078336   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.078351   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:24.078358   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:24.078418   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:24.120100   66615 cri.go:89] found id: ""
	I0429 20:09:24.120129   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.120139   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:24.120147   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:24.120211   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:24.160953   66615 cri.go:89] found id: ""
	I0429 20:09:24.160988   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.161000   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:24.161012   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:24.161029   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:24.176654   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:24.176686   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:24.256631   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:24.256652   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:24.256668   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:24.335379   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:24.335424   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:24.379616   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:24.379649   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:26.937283   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:26.956185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:26.956252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:26.997000   66615 cri.go:89] found id: ""
	I0429 20:09:26.997034   66615 logs.go:276] 0 containers: []
	W0429 20:09:26.997046   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:26.997053   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:26.997115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:27.042494   66615 cri.go:89] found id: ""
	I0429 20:09:27.042527   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.042538   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:27.042546   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:27.042608   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:27.086170   66615 cri.go:89] found id: ""
	I0429 20:09:27.086199   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.086211   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:27.086218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:27.086282   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:27.126502   66615 cri.go:89] found id: ""
	I0429 20:09:27.126531   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.126542   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:27.126560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:27.126635   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:27.175102   66615 cri.go:89] found id: ""
	I0429 20:09:27.175134   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.175142   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:27.175148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:27.175216   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:27.215983   66615 cri.go:89] found id: ""
	I0429 20:09:27.216013   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.216025   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:27.216033   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:27.216097   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:27.256427   66615 cri.go:89] found id: ""
	I0429 20:09:27.256456   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.256467   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:27.256474   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:27.256540   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:27.298444   66615 cri.go:89] found id: ""
	I0429 20:09:27.298479   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.298490   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:27.298501   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:27.298517   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:27.381579   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:27.381625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:27.429304   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:27.429350   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:27.483044   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:27.483082   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:27.500304   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:27.500332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:27.583909   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:30.084904   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:30.102417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:30.102486   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:30.146726   66615 cri.go:89] found id: ""
	I0429 20:09:30.146748   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.146755   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:30.146761   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:30.146809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:30.190739   66615 cri.go:89] found id: ""
	I0429 20:09:30.190768   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.190780   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:30.190788   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:30.190853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:30.228836   66615 cri.go:89] found id: ""
	I0429 20:09:30.228864   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.228879   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:30.228887   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:30.228951   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:30.270876   66615 cri.go:89] found id: ""
	I0429 20:09:30.270912   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.270920   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:30.270925   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:30.270995   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:30.310762   66615 cri.go:89] found id: ""
	I0429 20:09:30.310787   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.310795   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:30.310801   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:30.310850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:30.356339   66615 cri.go:89] found id: ""
	I0429 20:09:30.356363   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.356371   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:30.356376   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:30.356430   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:30.395540   66615 cri.go:89] found id: ""
	I0429 20:09:30.395575   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.395589   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:30.395598   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:30.395671   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:30.446237   66615 cri.go:89] found id: ""
	I0429 20:09:30.446263   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.446276   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:30.446286   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:30.446301   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:30.537309   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:30.537334   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:30.537349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:30.629116   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:30.629151   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:30.683308   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:30.683337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:30.735879   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:30.735910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.252322   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:33.268276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:33.268351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:33.309531   66615 cri.go:89] found id: ""
	I0429 20:09:33.309622   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.309641   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:33.309650   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:33.309719   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:33.367480   66615 cri.go:89] found id: ""
	I0429 20:09:33.367515   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.367527   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:33.367535   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:33.367595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:33.433717   66615 cri.go:89] found id: ""
	I0429 20:09:33.433742   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.433751   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:33.433756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:33.433820   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:33.484053   66615 cri.go:89] found id: ""
	I0429 20:09:33.484081   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.484093   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:33.484100   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:33.484165   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:33.524103   66615 cri.go:89] found id: ""
	I0429 20:09:33.524126   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.524136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:33.524143   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:33.524204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:33.565692   66615 cri.go:89] found id: ""
	I0429 20:09:33.565711   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.565719   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:33.565724   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:33.565784   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:33.607119   66615 cri.go:89] found id: ""
	I0429 20:09:33.607143   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.607153   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:33.607160   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:33.607225   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:33.648407   66615 cri.go:89] found id: ""
	I0429 20:09:33.648432   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.648440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:33.648449   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:33.648463   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:33.730744   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:33.730781   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:33.774295   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:33.774328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:33.829609   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:33.829653   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.846048   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:33.846092   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:33.924413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
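Each retry cycle above follows the same shape: minikube probes for a kube-apiserver process, asks the CRI runtime whether any control-plane container exists at all, and, finding none, falls back to collecting kubelet, dmesg, CRI-O and container-status logs before trying "describe nodes" again. A minimal shell sketch of the same checks, run by hand on the node; the individual commands are the ones visible in the log, but the loop itself is illustrative rather than minikube code:

  # Is a kube-apiserver process running for this profile?
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'

  # Does CRI-O know about any control-plane container, running or exited?
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
    echo "== ${name} =="
    sudo crictl ps -a --quiet --name="${name}"   # empty output corresponds to "0 containers" in the log
  done

  # With nothing running, only host-level logs are left to inspect
  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400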
	I0429 20:09:36.425072   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:36.440185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:36.440268   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:36.484364   66615 cri.go:89] found id: ""
	I0429 20:09:36.484386   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.484394   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:36.484400   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:36.484450   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:36.520436   66615 cri.go:89] found id: ""
	I0429 20:09:36.520466   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.520478   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:36.520487   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:36.520549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:36.563597   66615 cri.go:89] found id: ""
	I0429 20:09:36.563622   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.563630   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:36.563635   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:36.563704   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:36.613106   66615 cri.go:89] found id: ""
	I0429 20:09:36.613134   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.613143   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:36.613148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:36.613204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:36.658127   66615 cri.go:89] found id: ""
	I0429 20:09:36.658151   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.658159   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:36.658166   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:36.658229   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:36.707388   66615 cri.go:89] found id: ""
	I0429 20:09:36.707415   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.707423   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:36.707430   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:36.707479   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:36.753363   66615 cri.go:89] found id: ""
	I0429 20:09:36.753394   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.753405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:36.753413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:36.753475   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:36.801492   66615 cri.go:89] found id: ""
	I0429 20:09:36.801513   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.801521   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:36.801530   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:36.801542   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:36.857055   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:36.857108   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:36.874567   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:36.874595   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:36.956176   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:36.956202   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:36.956217   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:37.039958   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:37.039997   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:39.591442   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:39.607842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:39.607927   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:39.651917   66615 cri.go:89] found id: ""
	I0429 20:09:39.651941   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.651948   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:39.651955   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:39.652020   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:39.690032   66615 cri.go:89] found id: ""
	I0429 20:09:39.690059   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.690078   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:39.690086   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:39.690152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:39.733176   66615 cri.go:89] found id: ""
	I0429 20:09:39.733200   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.733209   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:39.733215   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:39.733261   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:39.779528   66615 cri.go:89] found id: ""
	I0429 20:09:39.779560   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.779572   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:39.779581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:39.779650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:39.822408   66615 cri.go:89] found id: ""
	I0429 20:09:39.822436   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.822445   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:39.822452   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:39.822522   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:39.864895   66615 cri.go:89] found id: ""
	I0429 20:09:39.864922   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.864930   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:39.864938   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:39.865008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:39.907498   66615 cri.go:89] found id: ""
	I0429 20:09:39.907523   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.907533   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:39.907539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:39.907606   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:39.948400   66615 cri.go:89] found id: ""
	I0429 20:09:39.948430   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.948440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:39.948449   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:39.948465   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:39.964733   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:39.964763   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:40.043568   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:40.043593   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:40.043609   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:40.130776   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:40.130815   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:40.182011   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:40.182042   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:42.739068   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:42.756144   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:42.756286   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:42.798776   66615 cri.go:89] found id: ""
	I0429 20:09:42.798801   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.798810   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:42.798815   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:42.798861   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:42.837122   66615 cri.go:89] found id: ""
	I0429 20:09:42.837146   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.837154   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:42.837159   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:42.837205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:42.875435   66615 cri.go:89] found id: ""
	I0429 20:09:42.875461   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.875471   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:42.875479   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:42.875536   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:42.920044   66615 cri.go:89] found id: ""
	I0429 20:09:42.920076   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.920087   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:42.920094   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:42.920175   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:42.960122   66615 cri.go:89] found id: ""
	I0429 20:09:42.960152   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.960163   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:42.960169   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:42.960215   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:42.999784   66615 cri.go:89] found id: ""
	I0429 20:09:42.999811   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.999829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:42.999837   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:42.999917   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:43.040882   66615 cri.go:89] found id: ""
	I0429 20:09:43.040930   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.040952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:43.040959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:43.041044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:43.082596   66615 cri.go:89] found id: ""
	I0429 20:09:43.082627   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.082639   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:43.082650   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:43.082672   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:43.140302   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:43.140343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:43.157508   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:43.157547   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:43.241025   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:43.241047   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:43.241061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:43.325820   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:43.325855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:45.871561   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:45.887323   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:45.887398   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:45.930021   66615 cri.go:89] found id: ""
	I0429 20:09:45.930050   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.930062   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:45.930088   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:45.930148   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:45.971404   66615 cri.go:89] found id: ""
	I0429 20:09:45.971434   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.971445   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:45.971452   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:45.971513   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:46.018801   66615 cri.go:89] found id: ""
	I0429 20:09:46.018825   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.018833   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:46.018838   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:46.018886   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:46.065118   66615 cri.go:89] found id: ""
	I0429 20:09:46.065140   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.065148   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:46.065153   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:46.065201   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:46.105244   66615 cri.go:89] found id: ""
	I0429 20:09:46.105271   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.105294   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:46.105309   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:46.105373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:46.153736   66615 cri.go:89] found id: ""
	I0429 20:09:46.153759   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.153768   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:46.153773   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:46.153836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:46.198940   66615 cri.go:89] found id: ""
	I0429 20:09:46.198965   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.198973   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:46.198979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:46.199064   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:46.238001   66615 cri.go:89] found id: ""
	I0429 20:09:46.238031   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.238044   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:46.238056   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:46.238087   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:46.292309   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:46.292357   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:46.307243   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:46.307274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:46.386832   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:46.386852   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:46.386869   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:46.468856   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:46.468891   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.017354   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:49.032753   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:49.032832   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:49.075345   66615 cri.go:89] found id: ""
	I0429 20:09:49.075375   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.075388   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:49.075394   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:49.075447   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:49.115294   66615 cri.go:89] found id: ""
	I0429 20:09:49.115328   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.115339   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:49.115347   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:49.115412   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:49.164115   66615 cri.go:89] found id: ""
	I0429 20:09:49.164140   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.164148   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:49.164154   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:49.164210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:49.207643   66615 cri.go:89] found id: ""
	I0429 20:09:49.207668   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.207679   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:49.207698   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:49.207762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:49.247121   66615 cri.go:89] found id: ""
	I0429 20:09:49.247147   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.247156   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:49.247162   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:49.247220   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:49.288594   66615 cri.go:89] found id: ""
	I0429 20:09:49.288626   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.288636   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:49.288643   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:49.288711   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:49.330243   66615 cri.go:89] found id: ""
	I0429 20:09:49.330273   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.330290   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:49.330300   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:49.330365   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:49.371304   66615 cri.go:89] found id: ""
	I0429 20:09:49.371348   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.371360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:49.371372   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:49.371392   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:49.450910   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:49.450949   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.494940   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:49.494970   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:49.553320   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:49.553364   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:49.568850   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:49.568878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:49.644932   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
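Every "describe nodes" attempt in these cycles fails the same way: the bundled kubectl is pointed at localhost:8443 and the connection is refused, because no apiserver container ever comes up. A quick manual confirmation from the node, using only paths already present in the log; the curl probe is an added assumption for illustration, not something the test harness runs:

  # Same kubectl binary and kubeconfig the log uses
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get nodes

  # Check whether anything is listening on the apiserver port at all
  curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on localhost:8443"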
	I0429 20:09:52.145702   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:52.162681   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:52.162756   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:52.204816   66615 cri.go:89] found id: ""
	I0429 20:09:52.204858   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.204870   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:52.204888   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:52.204963   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:52.248481   66615 cri.go:89] found id: ""
	I0429 20:09:52.248510   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.248519   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:52.248525   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:52.248596   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:52.289158   66615 cri.go:89] found id: ""
	I0429 20:09:52.289186   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.289194   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:52.289200   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:52.289260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:52.329905   66615 cri.go:89] found id: ""
	I0429 20:09:52.329931   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.329942   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:52.329950   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:52.330025   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:52.372523   66615 cri.go:89] found id: ""
	I0429 20:09:52.372546   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.372554   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:52.372560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:52.372623   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:52.414936   66615 cri.go:89] found id: ""
	I0429 20:09:52.414970   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.414982   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:52.414989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:52.415056   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:52.454139   66615 cri.go:89] found id: ""
	I0429 20:09:52.454164   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.454172   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:52.454178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:52.454236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:52.494093   66615 cri.go:89] found id: ""
	I0429 20:09:52.494129   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.494142   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:52.494155   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:52.494195   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:52.552104   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:52.552142   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:52.568430   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:52.568459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:52.649708   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:52.649736   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:52.649752   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:52.746231   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:52.746272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:55.296228   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:55.311257   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:55.311328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:55.352071   66615 cri.go:89] found id: ""
	I0429 20:09:55.352098   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.352109   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:55.352116   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:55.352177   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:55.399806   66615 cri.go:89] found id: ""
	I0429 20:09:55.399837   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.399847   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:55.399860   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:55.399947   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:55.444372   66615 cri.go:89] found id: ""
	I0429 20:09:55.444398   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.444406   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:55.444411   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:55.444468   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:55.485542   66615 cri.go:89] found id: ""
	I0429 20:09:55.485568   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.485579   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:55.485586   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:55.485670   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:55.535452   66615 cri.go:89] found id: ""
	I0429 20:09:55.535483   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.535494   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:55.535502   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:55.535566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:55.578009   66615 cri.go:89] found id: ""
	I0429 20:09:55.578036   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.578048   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:55.578056   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:55.578138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:55.618302   66615 cri.go:89] found id: ""
	I0429 20:09:55.618336   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.618347   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:55.618355   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:55.618419   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:55.660489   66615 cri.go:89] found id: ""
	I0429 20:09:55.660518   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.660526   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:55.660535   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:55.660548   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:55.713953   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:55.713993   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:55.729624   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:55.729656   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:55.813718   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:55.813746   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:55.813762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:55.898805   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:55.898849   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:58.467014   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:58.482852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:58.482925   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:58.522862   66615 cri.go:89] found id: ""
	I0429 20:09:58.522896   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.522908   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:58.522916   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:58.523000   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:58.568234   66615 cri.go:89] found id: ""
	I0429 20:09:58.568259   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.568266   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:58.568272   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:58.568327   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:58.609147   66615 cri.go:89] found id: ""
	I0429 20:09:58.609175   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.609185   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:58.609192   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:58.609265   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:58.657074   66615 cri.go:89] found id: ""
	I0429 20:09:58.657104   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.657115   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:58.657122   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:58.657186   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:58.706819   66615 cri.go:89] found id: ""
	I0429 20:09:58.706846   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.706857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:58.706865   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:58.706929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:58.754967   66615 cri.go:89] found id: ""
	I0429 20:09:58.754998   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.755007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:58.755018   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:58.755078   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:58.793657   66615 cri.go:89] found id: ""
	I0429 20:09:58.793694   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.793704   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:58.793709   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:58.793766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:58.832023   66615 cri.go:89] found id: ""
	I0429 20:09:58.832055   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.832066   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:58.832078   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:58.832094   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:58.886568   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:58.886605   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:58.902126   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:58.902154   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:58.986786   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:58.986814   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:58.986831   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:59.072258   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:59.072296   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:01.620172   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:01.636958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:01.637055   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:01.703865   66615 cri.go:89] found id: ""
	I0429 20:10:01.703890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.703899   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:01.703905   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:01.703950   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:01.742655   66615 cri.go:89] found id: ""
	I0429 20:10:01.742684   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.742692   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:01.742707   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:01.742778   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:01.782866   66615 cri.go:89] found id: ""
	I0429 20:10:01.782890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.782901   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:01.782908   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:01.782964   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:01.822958   66615 cri.go:89] found id: ""
	I0429 20:10:01.822984   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.822992   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:01.822997   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:01.823044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:01.868581   66615 cri.go:89] found id: ""
	I0429 20:10:01.868604   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.868612   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:01.868622   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:01.868675   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:01.908216   66615 cri.go:89] found id: ""
	I0429 20:10:01.908241   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.908249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:01.908255   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:01.908328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:01.953100   66615 cri.go:89] found id: ""
	I0429 20:10:01.953131   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.953142   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:01.953150   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:01.953213   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:01.999940   66615 cri.go:89] found id: ""
	I0429 20:10:01.999974   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.999988   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:01.999999   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:02.000012   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:02.061669   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:02.061704   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:02.077609   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:02.077640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:02.169643   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:02.169666   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:02.169679   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:02.250615   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:02.250657   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:04.803629   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:04.819286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:04.819364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:04.860501   66615 cri.go:89] found id: ""
	I0429 20:10:04.860530   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.860541   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:04.860548   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:04.860672   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:04.898444   66615 cri.go:89] found id: ""
	I0429 20:10:04.898472   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.898480   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:04.898486   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:04.898546   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:04.936569   66615 cri.go:89] found id: ""
	I0429 20:10:04.936599   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.936609   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:04.936617   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:04.936695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:04.979667   66615 cri.go:89] found id: ""
	I0429 20:10:04.979696   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.979708   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:04.979715   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:04.979768   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:05.019608   66615 cri.go:89] found id: ""
	I0429 20:10:05.019638   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.019650   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:05.019658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:05.019724   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:05.063723   66615 cri.go:89] found id: ""
	I0429 20:10:05.063749   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.063758   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:05.063765   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:05.063821   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:05.106676   66615 cri.go:89] found id: ""
	I0429 20:10:05.106704   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.106714   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:05.106721   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:05.106783   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:05.147652   66615 cri.go:89] found id: ""
	I0429 20:10:05.147683   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.147693   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:05.147704   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:05.147721   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:05.189048   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:05.189085   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:05.248635   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:05.248669   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:05.265791   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:05.265826   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:05.343190   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:05.343217   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:05.343234   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:07.926868   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:07.942581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:07.942656   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:07.981316   66615 cri.go:89] found id: ""
	I0429 20:10:07.981349   66615 logs.go:276] 0 containers: []
	W0429 20:10:07.981361   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:07.981368   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:07.981429   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:08.024017   66615 cri.go:89] found id: ""
	I0429 20:10:08.024045   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.024056   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:08.024062   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:08.024146   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:08.075761   66615 cri.go:89] found id: ""
	I0429 20:10:08.075786   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.075798   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:08.075805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:08.075864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:08.146501   66615 cri.go:89] found id: ""
	I0429 20:10:08.146528   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.146536   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:08.146541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:08.146624   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:08.204987   66615 cri.go:89] found id: ""
	I0429 20:10:08.205013   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.205021   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:08.205027   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:08.205083   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:08.244930   66615 cri.go:89] found id: ""
	I0429 20:10:08.244959   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.244970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:08.244979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:08.245040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:08.284204   66615 cri.go:89] found id: ""
	I0429 20:10:08.284232   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.284243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:08.284250   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:08.284305   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:08.324077   66615 cri.go:89] found id: ""
	I0429 20:10:08.324102   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.324113   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:08.324123   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:08.324139   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:08.341584   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:08.341614   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:08.429808   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:08.429827   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:08.429840   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:08.509906   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:08.509942   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:08.562662   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:08.562697   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:11.121673   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:11.137328   66615 kubeadm.go:591] duration metric: took 4m4.72832668s to restartPrimaryControlPlane
	W0429 20:10:11.137411   66615 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:10:11.137446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:10:13.254357   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.116867978s)
	I0429 20:10:13.254436   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:13.275293   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:10:13.287073   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:10:13.298046   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:10:13.298080   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:10:13.298132   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:10:13.311790   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:10:13.311861   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:10:13.323201   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:10:13.334284   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:10:13.334357   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:10:13.348597   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.361993   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:10:13.362055   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.376185   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:10:13.389715   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:10:13.389778   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:10:13.403955   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:10:13.675887   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:12:09.853929   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:12:09.854036   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:12:09.856141   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:12:09.856215   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:12:09.856314   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:12:09.856435   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:12:09.856529   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:12:09.856638   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:12:09.858658   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:12:09.858759   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:12:09.858821   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:12:09.858914   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:12:09.858967   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:12:09.859049   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:12:09.859118   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:12:09.859197   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:12:09.859311   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:12:09.859435   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:12:09.859548   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:12:09.859605   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:12:09.859678   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:12:09.859766   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:12:09.859856   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:12:09.859947   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:12:09.860025   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:12:09.860149   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:12:09.860228   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:12:09.860289   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:12:09.860390   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:12:09.862098   66615 out.go:204]   - Booting up control plane ...
	I0429 20:12:09.862211   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:12:09.862298   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:12:09.862360   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:12:09.862484   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:12:09.862720   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:12:09.862794   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:12:09.862882   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863117   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863244   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863470   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863544   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863814   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863895   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864144   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864223   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864393   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864408   66615 kubeadm.go:309] 
	I0429 20:12:09.864473   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:12:09.864526   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:12:09.864543   66615 kubeadm.go:309] 
	I0429 20:12:09.864589   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:12:09.864638   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:12:09.864779   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:12:09.864789   66615 kubeadm.go:309] 
	I0429 20:12:09.864911   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:12:09.864971   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:12:09.865026   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:12:09.865033   66615 kubeadm.go:309] 
	I0429 20:12:09.865150   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:12:09.865228   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:12:09.865241   66615 kubeadm.go:309] 
	I0429 20:12:09.865404   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:12:09.865538   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:12:09.865651   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:12:09.865755   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:12:09.865828   66615 kubeadm.go:309] 
	W0429 20:12:09.865940   66615 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0429 20:12:09.866027   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:12:10.987703   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.121642991s)
	I0429 20:12:10.987802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:12:11.007295   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:12:11.020772   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:12:11.020790   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:12:11.020838   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:12:11.033334   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:12:11.033405   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:12:11.044565   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:12:11.057087   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:12:11.057143   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:12:11.069908   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.082866   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:12:11.082920   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.096659   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:12:11.110106   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:12:11.110166   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:12:11.124952   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:12:11.396252   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:14:07.831448   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:14:07.831556   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:14:07.833111   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:14:07.833179   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:14:07.833288   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:14:07.833421   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:14:07.833530   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:14:07.833616   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:14:07.835518   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:14:07.835623   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:14:07.835703   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:14:07.835776   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:14:07.835839   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:14:07.835893   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:14:07.835957   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:14:07.836039   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:14:07.836129   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:14:07.836238   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:14:07.836350   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:14:07.836394   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:14:07.836441   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:14:07.836488   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:14:07.836559   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:14:07.836637   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:14:07.836683   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:14:07.836778   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:14:07.836854   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:14:07.836895   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:14:07.836950   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:14:07.838553   66615 out.go:204]   - Booting up control plane ...
	I0429 20:14:07.838635   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:14:07.838718   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:14:07.838836   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:14:07.838918   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:14:07.839069   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:14:07.839126   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:14:07.839180   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839369   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839450   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839654   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839779   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840008   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840076   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840322   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840380   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840571   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840594   66615 kubeadm.go:309] 
	I0429 20:14:07.840637   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:14:07.840673   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:14:07.840682   66615 kubeadm.go:309] 
	I0429 20:14:07.840715   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:14:07.840745   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:14:07.840844   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:14:07.840857   66615 kubeadm.go:309] 
	I0429 20:14:07.840969   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:14:07.841022   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:14:07.841073   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:14:07.841083   66615 kubeadm.go:309] 
	I0429 20:14:07.841184   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:14:07.841315   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:14:07.841325   66615 kubeadm.go:309] 
	I0429 20:14:07.841454   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:14:07.841550   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:14:07.841632   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:14:07.841697   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:14:07.841760   66615 kubeadm.go:393] duration metric: took 8m1.501853767s to StartCluster
	I0429 20:14:07.841781   66615 kubeadm.go:309] 
	I0429 20:14:07.841800   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:14:07.841853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:14:07.898194   66615 cri.go:89] found id: ""
	I0429 20:14:07.898227   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.898237   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:14:07.898244   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:14:07.898316   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:14:07.938873   66615 cri.go:89] found id: ""
	I0429 20:14:07.938903   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.938914   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:14:07.938921   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:14:07.938979   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:14:07.980523   66615 cri.go:89] found id: ""
	I0429 20:14:07.980551   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.980559   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:14:07.980565   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:14:07.980612   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:14:08.021334   66615 cri.go:89] found id: ""
	I0429 20:14:08.021366   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.021377   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:14:08.021389   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:14:08.021446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:14:08.060598   66615 cri.go:89] found id: ""
	I0429 20:14:08.060636   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.060648   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:14:08.060655   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:14:08.060716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:14:08.101689   66615 cri.go:89] found id: ""
	I0429 20:14:08.101715   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.101723   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:14:08.101729   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:14:08.101786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:14:08.143295   66615 cri.go:89] found id: ""
	I0429 20:14:08.143333   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.143344   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:14:08.143351   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:14:08.143408   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:14:08.190555   66615 cri.go:89] found id: ""
	I0429 20:14:08.190585   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.190597   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:14:08.190609   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:14:08.190624   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:14:08.251830   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:14:08.251870   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:14:08.306512   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:14:08.306554   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:14:08.323258   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:14:08.323283   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:14:08.405539   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:14:08.405568   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:14:08.405583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0429 20:14:08.514288   66615 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0429 20:14:08.514344   66615 out.go:239] * 
	* 
	W0429 20:14:08.514431   66615 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.514465   66615 out.go:239] * 
	* 
	W0429 20:14:08.515399   66615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:14:08.518578   66615 out.go:177] 
	W0429 20:14:08.519725   66615 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.519782   66615 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0429 20:14:08.519816   66615 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0429 20:14:08.521068   66615 out.go:177] 

                                                
                                                
** /stderr **
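The repeated [kubelet-check] failures in the log above are kubeadm's wait loop probing the kubelet health endpoint on localhost:10248 and getting connection refused. As an illustrative aside (not part of the minikube test suite), a minimal Go sketch of that same probe is shown below; the endpoint and port come from the log itself, everything else is an assumption for illustration:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// Probe the kubelet healthz endpoint the way the [kubelet-check] lines
// describe: GET http://localhost:10248/healthz with a short timeout.
func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// "connection refused" here corresponds to the failure in the log:
		// the kubelet never started listening, so kubeadm times out waiting.
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz:", resp.Status)
}

The log's own suggestion for this failure mode is to inspect 'journalctl -xeu kubelet' and to retry with 'minikube start --extra-config=kubelet.cgroup-driver=systemd'.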
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-919612 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612: exit status 2 (251.10236ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-919612 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-919612 logs -n 25: (1.61524058s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:55 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| ssh     | cert-options-437743 ssh                                | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-437743 -- sudo                         | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-437743                                 | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-161370            | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-456788             | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-193781 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | disable-driver-mounts-193781                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 20:00 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-866143  | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-161370                 | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-919612        | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-456788                  | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:01 UTC | 29 Apr 24 20:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-919612             | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-866143       | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:10 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:02:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:02:45.502823   66875 out.go:291] Setting OutFile to fd 1 ...
	I0429 20:02:45.503073   66875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:45.503084   66875 out.go:304] Setting ErrFile to fd 2...
	I0429 20:02:45.503089   66875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:45.503272   66875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 20:02:45.503808   66875 out.go:298] Setting JSON to false
	I0429 20:02:45.504681   66875 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6263,"bootTime":1714414702,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 20:02:45.504736   66875 start.go:139] virtualization: kvm guest
	I0429 20:02:45.507344   66875 out.go:177] * [default-k8s-diff-port-866143] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 20:02:45.508715   66875 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:02:45.508745   66875 notify.go:220] Checking for updates...
	I0429 20:02:45.510093   66875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:02:45.512200   66875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:02:45.513622   66875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:02:45.514915   66875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 20:02:45.516228   66875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:02:45.517923   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:02:45.518366   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:45.518446   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:45.533484   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46187
	I0429 20:02:45.533901   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:45.534427   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:02:45.534448   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:45.534822   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:45.535013   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:02:45.535292   66875 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:02:45.535595   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:45.535639   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:45.551065   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0429 20:02:45.551469   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:45.551906   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:02:45.551928   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:45.552239   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:45.552451   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:02:45.584714   66875 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 20:02:45.586089   66875 start.go:297] selected driver: kvm2
	I0429 20:02:45.586117   66875 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:45.586250   66875 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:02:45.587043   66875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:45.587136   66875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 20:02:45.601799   66875 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 20:02:45.602171   66875 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:02:45.602246   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:02:45.602265   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:02:45.602323   66875 start.go:340] cluster config:
	{Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:45.602444   66875 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:45.605081   66875 out.go:177] * Starting "default-k8s-diff-port-866143" primary control-plane node in "default-k8s-diff-port-866143" cluster
	I0429 20:02:42.794291   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:45.866333   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:45.606536   66875 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:02:45.606590   66875 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 20:02:45.606602   66875 cache.go:56] Caching tarball of preloaded images
	I0429 20:02:45.606687   66875 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 20:02:45.606704   66875 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 20:02:45.606799   66875 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/config.json ...
	I0429 20:02:45.606986   66875 start.go:360] acquireMachinesLock for default-k8s-diff-port-866143: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:02:51.946332   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:55.018269   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:01.098329   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:04.170389   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:10.250316   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:13.322292   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:19.402290   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:22.474356   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:28.554348   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:31.626416   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:37.706282   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:40.778321   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:46.858318   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:49.930321   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:56.010331   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:59.082336   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:05.162299   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:08.234328   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:14.314352   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:17.386337   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:23.466350   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:26.538284   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:32.618297   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:35.690319   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:41.770372   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:44.842280   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:50.922320   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:53.994336   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:00.074389   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:03.146353   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:09.226369   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:12.298407   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:15.302828   66218 start.go:364] duration metric: took 4m7.483402316s to acquireMachinesLock for "no-preload-456788"
	I0429 20:05:15.302889   66218 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:15.302896   66218 fix.go:54] fixHost starting: 
	I0429 20:05:15.303301   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:15.303337   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:15.319582   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I0429 20:05:15.320057   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:15.320597   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:05:15.320620   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:15.321017   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:15.321272   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:15.321472   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:05:15.323137   66218 fix.go:112] recreateIfNeeded on no-preload-456788: state=Stopped err=<nil>
	I0429 20:05:15.323171   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	W0429 20:05:15.323346   66218 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:15.325520   66218 out.go:177] * Restarting existing kvm2 VM for "no-preload-456788" ...
	I0429 20:05:15.327122   66218 main.go:141] libmachine: (no-preload-456788) Calling .Start
	I0429 20:05:15.327314   66218 main.go:141] libmachine: (no-preload-456788) Ensuring networks are active...
	I0429 20:05:15.328136   66218 main.go:141] libmachine: (no-preload-456788) Ensuring network default is active
	I0429 20:05:15.328437   66218 main.go:141] libmachine: (no-preload-456788) Ensuring network mk-no-preload-456788 is active
	I0429 20:05:15.328771   66218 main.go:141] libmachine: (no-preload-456788) Getting domain xml...
	I0429 20:05:15.329442   66218 main.go:141] libmachine: (no-preload-456788) Creating domain...
	I0429 20:05:16.534970   66218 main.go:141] libmachine: (no-preload-456788) Waiting to get IP...
	I0429 20:05:16.536019   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:16.536375   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:16.536444   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:16.536369   67416 retry.go:31] will retry after 240.743093ms: waiting for machine to come up
	I0429 20:05:16.779123   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:16.779623   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:16.779659   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:16.779558   67416 retry.go:31] will retry after 355.595109ms: waiting for machine to come up
	I0429 20:05:17.137145   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:17.137512   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:17.137542   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:17.137480   67416 retry.go:31] will retry after 347.905643ms: waiting for machine to come up
	I0429 20:05:17.487174   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:17.487566   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:17.487597   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:17.487543   67416 retry.go:31] will retry after 547.016094ms: waiting for machine to come up
	I0429 20:05:15.300221   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:15.300278   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:05:15.300613   65980 buildroot.go:166] provisioning hostname "embed-certs-161370"
	I0429 20:05:15.300652   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:05:15.300910   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:05:15.302677   65980 machine.go:97] duration metric: took 4m37.41104152s to provisionDockerMachine
	I0429 20:05:15.302722   65980 fix.go:56] duration metric: took 4m37.432092484s for fixHost
	I0429 20:05:15.302728   65980 start.go:83] releasing machines lock for "embed-certs-161370", held for 4m37.432113341s
	W0429 20:05:15.302753   65980 start.go:713] error starting host: provision: host is not running
	W0429 20:05:15.302871   65980 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0429 20:05:15.302882   65980 start.go:728] Will try again in 5 seconds ...
	I0429 20:05:18.036617   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:18.037042   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:18.037104   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:18.037025   67416 retry.go:31] will retry after 465.100134ms: waiting for machine to come up
	I0429 20:05:18.503846   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:18.504326   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:18.504352   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:18.504283   67416 retry.go:31] will retry after 672.007195ms: waiting for machine to come up
	I0429 20:05:19.178173   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:19.178570   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:19.178604   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:19.178516   67416 retry.go:31] will retry after 744.052058ms: waiting for machine to come up
	I0429 20:05:19.924561   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:19.925029   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:19.925060   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:19.925002   67416 retry.go:31] will retry after 1.06511003s: waiting for machine to come up
	I0429 20:05:20.991584   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:20.992015   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:20.992046   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:20.991980   67416 retry.go:31] will retry after 1.677065765s: waiting for machine to come up
	I0429 20:05:22.671760   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:22.672123   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:22.672149   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:22.672085   67416 retry.go:31] will retry after 1.979191189s: waiting for machine to come up
	I0429 20:05:20.303964   65980 start.go:360] acquireMachinesLock for embed-certs-161370: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:05:24.654246   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:24.654711   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:24.654735   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:24.654663   67416 retry.go:31] will retry after 1.839551716s: waiting for machine to come up
	I0429 20:05:26.496511   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:26.496982   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:26.497017   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:26.496939   67416 retry.go:31] will retry after 3.505979368s: waiting for machine to come up
	I0429 20:05:30.006590   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:30.006916   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:30.006951   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:30.006871   67416 retry.go:31] will retry after 3.811785899s: waiting for machine to come up
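The "will retry after ...: waiting for machine to come up" lines above are minikube's retry helper polling libvirt for the domain's DHCP lease and sleeping a growing, jittered interval between attempts. The following is a minimal, self-contained sketch of that poll-with-backoff pattern; waitForIP, lookupIP and the interval constants are illustrative stand-ins, not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoLease stands in for "unable to find current IP address of domain".
var errNoLease = errors.New("unable to find current IP address")

// lookupIP is a hypothetical stand-in for querying libvirt for the DHCP
// lease; here it simply fails for the first few attempts.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.39.235", nil
}

// waitForIP retries lookupIP with a growing, jittered delay, mirroring the
// "will retry after ...: waiting for machine to come up" log lines.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, capping it so polls stay frequent.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 2*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out after %v waiting for machine IP", timeout)
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("found IP:", ip)
}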
	I0429 20:05:35.155600   66615 start.go:364] duration metric: took 3m25.093405289s to acquireMachinesLock for "old-k8s-version-919612"
	I0429 20:05:35.155655   66615 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:35.155661   66615 fix.go:54] fixHost starting: 
	I0429 20:05:35.155999   66615 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:35.156034   66615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:35.173332   66615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0429 20:05:35.173754   66615 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:35.174261   66615 main.go:141] libmachine: Using API Version  1
	I0429 20:05:35.174294   66615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:35.174602   66615 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:35.174797   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:35.174987   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetState
	I0429 20:05:35.176453   66615 fix.go:112] recreateIfNeeded on old-k8s-version-919612: state=Stopped err=<nil>
	I0429 20:05:35.176478   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	W0429 20:05:35.176647   66615 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:35.178966   66615 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-919612" ...
	I0429 20:05:33.823293   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.823787   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has current primary IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.823806   66218 main.go:141] libmachine: (no-preload-456788) Found IP for machine: 192.168.39.235
	I0429 20:05:33.823830   66218 main.go:141] libmachine: (no-preload-456788) Reserving static IP address...
	I0429 20:05:33.824243   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "no-preload-456788", mac: "52:54:00:15:ae:18", ip: "192.168.39.235"} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.824279   66218 main.go:141] libmachine: (no-preload-456788) DBG | skip adding static IP to network mk-no-preload-456788 - found existing host DHCP lease matching {name: "no-preload-456788", mac: "52:54:00:15:ae:18", ip: "192.168.39.235"}
	I0429 20:05:33.824293   66218 main.go:141] libmachine: (no-preload-456788) Reserved static IP address: 192.168.39.235
	I0429 20:05:33.824308   66218 main.go:141] libmachine: (no-preload-456788) Waiting for SSH to be available...
	I0429 20:05:33.824323   66218 main.go:141] libmachine: (no-preload-456788) DBG | Getting to WaitForSSH function...
	I0429 20:05:33.826371   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.826678   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.826711   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.826808   66218 main.go:141] libmachine: (no-preload-456788) DBG | Using SSH client type: external
	I0429 20:05:33.826836   66218 main.go:141] libmachine: (no-preload-456788) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa (-rw-------)
	I0429 20:05:33.826863   66218 main.go:141] libmachine: (no-preload-456788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:05:33.826876   66218 main.go:141] libmachine: (no-preload-456788) DBG | About to run SSH command:
	I0429 20:05:33.826887   66218 main.go:141] libmachine: (no-preload-456788) DBG | exit 0
	I0429 20:05:33.954275   66218 main.go:141] libmachine: (no-preload-456788) DBG | SSH cmd err, output: <nil>: 
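WaitForSSH above probes the guest by shelling out to the system ssh client with a fixed set of hardening flags and running "exit 0" until it succeeds. A rough equivalent using os/exec is sketched below; the flag list is a subset of the logged command, and the helper itself is illustrative rather than minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `ssh ... <user>@<ip> exit 0` with the same kinds of options
// seen in the log and reports whether the command exited successfully.
func sshReady(ip, user, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, ip),
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	// Hypothetical values; in the report these come from the machine config.
	ip, user, key := "192.168.39.235", "docker", "/path/to/id_rsa"
	for i := 0; i < 10; i++ {
		if sshReady(ip, user, key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}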
	I0429 20:05:33.954631   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetConfigRaw
	I0429 20:05:33.955387   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:33.957827   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.958210   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.958241   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.958510   66218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/config.json ...
	I0429 20:05:33.958707   66218 machine.go:94] provisionDockerMachine start ...
	I0429 20:05:33.958726   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:33.958952   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:33.961236   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.961535   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.961564   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.961692   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:33.961857   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:33.962015   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:33.962163   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:33.962339   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:33.962522   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:33.962533   66218 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:05:34.070746   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:05:34.070777   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.071037   66218 buildroot.go:166] provisioning hostname "no-preload-456788"
	I0429 20:05:34.071062   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.071305   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.073680   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.074016   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.074043   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.074203   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.074374   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.074513   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.074612   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.074743   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.074946   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.074960   66218 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-456788 && echo "no-preload-456788" | sudo tee /etc/hostname
	I0429 20:05:34.198256   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-456788
	
	I0429 20:05:34.198286   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.201126   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.201482   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.201521   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.201710   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.201914   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.202055   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.202219   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.202361   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.202549   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.202573   66218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-456788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-456788/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-456788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:05:34.324678   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
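Hostname provisioning is two remote commands: `sudo hostname <name> && echo <name> | sudo tee /etc/hostname`, followed by the shell fragment above that rewrites the 127.0.1.1 entry in /etc/hosts only when it is not already correct. Building that second command from a hostname could look like the sketch below; buildHostsCommand is an illustrative helper, not the actual provisioner.

package main

import "fmt"

// buildHostsCommand returns a shell snippet that maps 127.0.1.1 to the given
// hostname in /etc/hosts: it rewrites an existing 127.0.1.1 line if present,
// appends one otherwise, and does nothing when the mapping already exists.
func buildHostsCommand(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	// The command would normally be handed to the SSH runner; here it is printed.
	fmt.Println(buildHostsCommand("no-preload-456788"))
}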
	I0429 20:05:34.324710   66218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:05:34.324732   66218 buildroot.go:174] setting up certificates
	I0429 20:05:34.324744   66218 provision.go:84] configureAuth start
	I0429 20:05:34.324756   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.325032   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:34.327623   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.328010   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.328040   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.328149   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.330359   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.330679   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.330711   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.330811   66218 provision.go:143] copyHostCerts
	I0429 20:05:34.330865   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:05:34.330878   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:05:34.330939   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:05:34.331023   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:05:34.331031   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:05:34.331054   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:05:34.331111   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:05:34.331119   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:05:34.331148   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
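copyHostCerts above refreshes cert.pem, key.pem and ca.pem under the .minikube directory by removing any stale copy and copying the source file back in, logging the byte count. A small stand-alone version of that remove-then-copy helper follows; the paths in main are examples only.

package main

import (
	"fmt"
	"io"
	"os"
)

// copyCert replaces dst with a fresh copy of src, the way copyHostCerts
// removes an existing file before copying, and returns the bytes written.
func copyCert(src, dst string) (int64, error) {
	if _, err := os.Stat(dst); err == nil {
		fmt.Println("found", dst, ", removing ...")
		if err := os.Remove(dst); err != nil {
			return 0, err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return 0, err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return 0, err
	}
	defer out.Close()
	return io.Copy(out, in)
}

func main() {
	n, err := copyCert("certs/ca.pem", "ca.pem")
	if err != nil {
		fmt.Println("copy failed:", err)
		return
	}
	fmt.Printf("cp: certs/ca.pem --> ca.pem (%d bytes)\n", n)
}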
	I0429 20:05:34.331231   66218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.no-preload-456788 san=[127.0.0.1 192.168.39.235 localhost minikube no-preload-456788]
	I0429 20:05:34.444358   66218 provision.go:177] copyRemoteCerts
	I0429 20:05:34.444420   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:05:34.444445   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.447129   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.447432   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.447466   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.447623   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.447833   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.447999   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.448129   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:34.533465   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:05:34.561724   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:05:34.589229   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 20:05:34.617451   66218 provision.go:87] duration metric: took 292.691614ms to configureAuth
	I0429 20:05:34.617491   66218 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:05:34.617733   66218 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:05:34.617821   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.620628   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.621016   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.621047   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.621257   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.621532   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.621718   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.621892   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.622085   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.622289   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.622305   66218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:05:34.908031   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:05:34.908064   66218 machine.go:97] duration metric: took 949.343369ms to provisionDockerMachine
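The %!s(MISSING) token in the logged command above is what Go's fmt package prints when a format string carries a %s verb with no matching argument, so the command that actually ran almost certainly contained a literal `printf %s` feeding the environment file into tee. In effect the runner writes a one-line /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS and restarts crio. A hedged sketch of composing that command (crioOptionsCommand is an illustrative helper):

package main

import "fmt"

// crioOptionsCommand builds the remote command that writes the crio
// environment file and restarts the service, mirroring the logged step.
// The printf %s in the shell is what surfaces as %!s(MISSING) once the
// finished command string is passed back through a Go format call.
func crioOptionsCommand(serviceCIDR string) string {
	env := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, env)
}

func main() {
	fmt.Println(crioOptionsCommand("10.96.0.0/12"))
}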
	I0429 20:05:34.908077   66218 start.go:293] postStartSetup for "no-preload-456788" (driver="kvm2")
	I0429 20:05:34.908091   66218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:05:34.908107   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:34.908452   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:05:34.908489   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.911574   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.912026   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.912054   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.912219   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.912428   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.912616   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.912743   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:34.997625   66218 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:05:35.002661   66218 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:05:35.002687   66218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:05:35.002753   66218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:05:35.002822   66218 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:05:35.002906   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:05:35.013292   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:35.039830   66218 start.go:296] duration metric: took 131.741312ms for postStartSetup
	I0429 20:05:35.039865   66218 fix.go:56] duration metric: took 19.736969384s for fixHost
	I0429 20:05:35.039905   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.042526   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.042877   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.042912   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.043032   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.043239   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.043416   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.043534   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.043696   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:35.043848   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:35.043858   66218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:05:35.155463   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421135.123583649
	
	I0429 20:05:35.155485   66218 fix.go:216] guest clock: 1714421135.123583649
	I0429 20:05:35.155496   66218 fix.go:229] Guest: 2024-04-29 20:05:35.123583649 +0000 UTC Remote: 2024-04-29 20:05:35.039869068 +0000 UTC m=+267.371683880 (delta=83.714581ms)
	I0429 20:05:35.155514   66218 fix.go:200] guest clock delta is within tolerance: 83.714581ms
	I0429 20:05:35.155519   66218 start.go:83] releasing machines lock for "no-preload-456788", held for 19.852645936s
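The fix step just above reads the guest clock over SSH with `date +%s.%N`, compares it to the host clock, and only resyncs when the difference exceeds a tolerance; here the ~84ms delta is within bounds. A small sketch of parsing that output and computing the delta is below; parseGuestClock and the 2s tolerance are assumptions for illustration, while the timestamps in main are taken from the log.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` (seconds.nanoseconds)
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1714421135.123583649")
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	host := time.Date(2024, 4, 29, 20, 5, 35, 39869068, time.UTC)
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}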
	I0429 20:05:35.155544   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.155881   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:35.158682   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.159051   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.159070   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.159205   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.159793   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.159987   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.160077   66218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:05:35.160117   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.160216   66218 ssh_runner.go:195] Run: cat /version.json
	I0429 20:05:35.160244   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.162788   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163016   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163226   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.163250   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163372   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.163449   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.163475   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163537   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.163621   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.163723   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.163791   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.163873   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:35.163920   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.164064   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:35.248518   66218 ssh_runner.go:195] Run: systemctl --version
	I0429 20:05:35.271479   66218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:05:35.423324   66218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:05:35.430371   66218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:05:35.430445   66218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:05:35.447860   66218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:05:35.447886   66218 start.go:494] detecting cgroup driver to use...
	I0429 20:05:35.447949   66218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:05:35.464102   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:05:35.479069   66218 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:05:35.479158   66218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:05:35.493800   66218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:05:35.509284   66218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:05:35.627273   66218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:05:35.785213   66218 docker.go:233] disabling docker service ...
	I0429 20:05:35.785300   66218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:05:35.803584   66218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:05:35.818874   66218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:05:35.984309   66218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:05:36.128841   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:05:36.148237   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:05:36.172144   66218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:05:36.172243   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.191274   66218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:05:36.191353   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.209656   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.224474   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.238802   66218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:05:36.252515   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.264522   66218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.286496   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
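Configuring cri-o above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf: point pause_image at registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, drop any conmon_cgroup line and re-add it as "pod", and make sure default_sysctls opens unprivileged ports. The same edits expressed as string rewriting in Go, as a sketch over an in-memory config rather than the real file:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the substitutions the sed commands above perform,
// but on an in-memory copy of 02-crio.conf.
func rewriteCrioConf(conf string) string {
	// pause_image -> registry.k8s.io/pause:3.9
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// cgroup_manager -> cgroupfs
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup line, then re-add it after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	// ensure a default_sysctls block exists and opens unprivileged ports
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n]\n"
	}
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "$0\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	return conf
}

func main() {
	sample := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	fmt.Println(rewriteCrioConf(sample))
}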
	I0429 20:05:36.299127   66218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:05:36.310702   66218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:05:36.310760   66218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:05:36.336226   66218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
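The netfilter check above is a probe-then-fix sequence: the sysctl for net.bridge.bridge-nf-call-iptables fails with status 255 because br_netfilter is not loaded yet, so the runner loads the module and then enables IPv4 forwarding. The same flow sketched with os/exec (intended to be run as root on a Linux guest; ensureNetfilter is an illustrative name):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter mirrors the logged sequence: probe the bridge sysctl, load
// br_netfilter if the probe fails, then turn on IPv4 forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl key only exists once br_netfilter is loaded.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
		return
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}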
	I0429 20:05:36.348617   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:36.474875   66218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:05:36.619181   66218 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:05:36.619257   66218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:05:36.625401   66218 start.go:562] Will wait 60s for crictl version
	I0429 20:05:36.625475   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:36.630232   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:05:36.667005   66218 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:05:36.667093   66218 ssh_runner.go:195] Run: crio --version
	I0429 20:05:36.699758   66218 ssh_runner.go:195] Run: crio --version
	I0429 20:05:36.734406   66218 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:05:36.735853   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:36.738683   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:36.739019   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:36.739049   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:36.739310   66218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 20:05:36.745227   66218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:36.760124   66218 kubeadm.go:877] updating cluster {Name:no-preload-456788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:05:36.760238   66218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:05:36.760278   66218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:05:36.801389   66218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:05:36.801414   66218 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 20:05:36.801470   66218 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:36.801508   66218 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:36.801524   66218 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:36.801559   66218 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:36.801580   66218 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.801632   66218 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0429 20:05:36.801687   66218 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:36.801688   66218 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:36.803301   66218 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:36.803300   66218 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.803308   66218 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:36.803382   66218 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
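Because this profile runs with no preload tarball, the check above lists what the container runtime already holds (`sudo crictl images --output json`), concludes the control-plane images are not preloaded, and falls back to the on-disk image cache; the "daemon lookup" errors are expected, since there is no local Docker daemon holding those images. A sketch of the presence check by parsing the crictl JSON is below; only the repoTags field is modeled, and the full schema has more fields.

package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages mirrors the part of `crictl images --output json` needed here.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// missingImages returns which of the required tags are not present in the
// runtime, which is the check behind "assuming images are not preloaded".
func missingImages(crictlJSON []byte, required []string) ([]string, error) {
	var imgs crictlImages
	if err := json.Unmarshal(crictlJSON, &imgs); err != nil {
		return nil, err
	}
	present := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			present[tag] = true
		}
	}
	var missing []string
	for _, tag := range required {
		if !present[tag] {
			missing = append(missing, tag)
		}
	}
	return missing, nil
}

func main() {
	// A tiny, hand-written stand-in for real crictl output.
	out := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]}]}`)
	missing, err := missingImages(out, []string{
		"registry.k8s.io/pause:3.9",
		"registry.k8s.io/kube-apiserver:v1.30.0",
	})
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println("missing from runtime:", missing)
}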
	I0429 20:05:36.956976   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.964957   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.022376   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.025860   66218 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0429 20:05:37.025893   66218 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0429 20:05:37.025915   66218 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.025924   66218 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:37.025962   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.025964   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.072629   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.072688   66218 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0429 20:05:37.072713   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:37.072741   66218 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.072791   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.118610   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0429 20:05:37.118704   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.118720   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.128364   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0429 20:05:37.128474   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:37.161350   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0429 20:05:37.165670   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0429 20:05:37.165693   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0429 20:05:37.165710   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.165754   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.165762   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0429 20:05:37.165779   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:37.167440   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:37.174173   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:37.180560   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:37.715733   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
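For each image that "needs transfer", the flow above is: compare the image ID reported by podman against the expected hash, remove the stale tag with crictl rmi, copy the cached tarball from the host (skipping the copy when an identical file is already on the VM), and finally `sudo podman load -i` the tarball. A condensed sketch of that per-image decision, with hypothetical print-only helpers standing in for the SSH runner:

package main

import "fmt"

// loadedID would normally come from `sudo podman image inspect --format {{.Id}} <image>`;
// remoteFileExists would stat the tarball on the VM. Both are stubs here.
func loadedID(image string) string      { return "" } // "" means not present
func remoteFileExists(path string) bool { return true }
func run(cmd string)                    { fmt.Println("run:", cmd) }

// ensureImage mirrors the cache flow in the log: if the runtime does not hold
// the image at the expected ID, remove any stale tag, copy the cached tarball
// (unless it is already on the VM), and load it with podman.
func ensureImage(image, wantID, tarball string) {
	if loadedID(image) == wantID {
		return // already present at the right hash
	}
	run("sudo /usr/bin/crictl rmi " + image)
	if !remoteFileExists("/var/lib/minikube/images/" + tarball) {
		run("scp <cache>/" + tarball + " -> /var/lib/minikube/images/" + tarball)
	} else {
		fmt.Println("copy: skipping /var/lib/minikube/images/" + tarball + " (exists)")
	}
	run("sudo podman load -i /var/lib/minikube/images/" + tarball)
}

func main() {
	ensureImage("registry.k8s.io/kube-proxy:v1.30.0",
		"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
		"kube-proxy_v1.30.0")
}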
	I0429 20:05:35.180393   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .Start
	I0429 20:05:35.180576   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring networks are active...
	I0429 20:05:35.181281   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network default is active
	I0429 20:05:35.181678   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network mk-old-k8s-version-919612 is active
	I0429 20:05:35.182102   66615 main.go:141] libmachine: (old-k8s-version-919612) Getting domain xml...
	I0429 20:05:35.182867   66615 main.go:141] libmachine: (old-k8s-version-919612) Creating domain...
	I0429 20:05:36.459478   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting to get IP...
	I0429 20:05:36.460301   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.460751   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.460817   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.460706   67552 retry.go:31] will retry after 280.48781ms: waiting for machine to come up
	I0429 20:05:36.743188   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.743630   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.743658   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.743591   67552 retry.go:31] will retry after 326.238132ms: waiting for machine to come up
	I0429 20:05:37.071146   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.071576   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.071609   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.071527   67552 retry.go:31] will retry after 380.72234ms: waiting for machine to come up
	I0429 20:05:37.453967   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.454435   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.454464   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.454385   67552 retry.go:31] will retry after 593.303053ms: waiting for machine to come up
	I0429 20:05:38.049072   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.049555   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.049587   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.049500   67552 retry.go:31] will retry after 694.752524ms: waiting for machine to come up
	I0429 20:05:38.746542   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.747034   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.747065   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.747002   67552 retry.go:31] will retry after 860.161186ms: waiting for machine to come up
	I0429 20:05:39.609098   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:39.609601   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:39.609634   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:39.609544   67552 retry.go:31] will retry after 726.889681ms: waiting for machine to come up
	I0429 20:05:39.327634   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.161845487s)
	I0429 20:05:39.327673   66218 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.161870572s)
	I0429 20:05:39.327710   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0429 20:05:39.327675   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0429 20:05:39.327737   66218 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:39.327748   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0: (2.16027023s)
	I0429 20:05:39.327805   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:39.327811   66218 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0429 20:05:39.327821   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0: (2.153617598s)
	I0429 20:05:39.327846   66218 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:39.327878   66218 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0429 20:05:39.327891   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0: (2.147303278s)
	I0429 20:05:39.327910   66218 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:39.327929   66218 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0429 20:05:39.327944   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327954   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.612190652s)
	I0429 20:05:39.327960   66218 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:39.327984   66218 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0429 20:05:39.328035   66218 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:39.328061   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327991   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327886   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.333555   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:39.343257   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:41.263038   66218 ssh_runner.go:235] Completed: which crictl: (1.934889703s)
	I0429 20:05:41.263103   66218 ssh_runner.go:235] Completed: which crictl: (1.93491368s)
	I0429 20:05:41.263121   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:41.263132   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.935299869s)
	I0429 20:05:41.263153   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0: (1.929577799s)
	I0429 20:05:41.263155   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0429 20:05:41.263217   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.919934007s)
	I0429 20:05:41.263221   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0429 20:05:41.263248   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:41.263251   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0429 20:05:41.263290   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:41.263301   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:41.263343   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:41.263159   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:40.338292   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:40.338823   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:40.338864   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:40.338757   67552 retry.go:31] will retry after 1.310400969s: waiting for machine to come up
	I0429 20:05:41.651107   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:41.651625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:41.651670   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:41.651575   67552 retry.go:31] will retry after 1.769756679s: waiting for machine to come up
	I0429 20:05:43.423326   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:43.423829   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:43.423869   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:43.423790   67552 retry.go:31] will retry after 1.748237944s: waiting for machine to come up
	I0429 20:05:44.084051   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.820737476s)
	I0429 20:05:44.084139   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.820774517s)
	I0429 20:05:44.084167   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.820842646s)
	I0429 20:05:44.084186   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0429 20:05:44.084142   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0429 20:05:44.084202   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0429 20:05:44.084211   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:44.084065   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0: (2.820919138s)
	I0429 20:05:44.084244   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0429 20:05:44.084260   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:44.084272   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0: (2.82086612s)
	I0429 20:05:44.084305   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0429 20:05:44.084331   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:44.084375   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:44.091151   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0429 20:05:46.553783   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.469493694s)
	I0429 20:05:46.553882   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0429 20:05:46.553912   66218 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:46.553837   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.469479626s)
	I0429 20:05:46.553973   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:46.553975   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0429 20:05:47.510118   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0429 20:05:47.510169   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:47.510212   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:45.173157   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:45.173617   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:45.173642   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:45.173563   67552 retry.go:31] will retry after 2.784243469s: waiting for machine to come up
	I0429 20:05:47.959942   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:47.960473   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:47.960508   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:47.960410   67552 retry.go:31] will retry after 3.046526969s: waiting for machine to come up
	I0429 20:05:49.069163   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.55892426s)
	I0429 20:05:49.069202   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0429 20:05:49.069231   66218 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:49.069276   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:51.007941   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:51.008230   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:51.008253   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:51.008213   67552 retry.go:31] will retry after 4.220985004s: waiting for machine to come up
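The repeated "will retry after …: waiting for machine to come up" lines above come from polling libvirt for the VM's DHCP lease with a growing delay between attempts. A minimal Go sketch of that wait-with-backoff pattern (the function name, delays and jitter below are illustrative, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) between attempts, similar to the
// "will retry after ..." lines in the log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := time.Second
	for {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", errors.New("timed out waiting for machine to come up")
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 8*time.Second {
			backoff *= 2
		}
	}
}

func main() {
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		if time.Since(start) > 5*time.Second { // pretend DHCP answers after ~5s
			return "192.168.72.240", nil
		}
		return "", errors.New("no lease yet")
	}, 30*time.Second)
	fmt.Println(ip, err)
}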
	I0429 20:05:56.579154   66875 start.go:364] duration metric: took 3m10.972135355s to acquireMachinesLock for "default-k8s-diff-port-866143"
	I0429 20:05:56.579208   66875 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:56.579230   66875 fix.go:54] fixHost starting: 
	I0429 20:05:56.579615   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:56.579655   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:56.599113   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0429 20:05:56.599627   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:56.600173   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:05:56.600198   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:56.600488   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:56.600694   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:05:56.600849   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:05:56.602291   66875 fix.go:112] recreateIfNeeded on default-k8s-diff-port-866143: state=Stopped err=<nil>
	I0429 20:05:56.602315   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	W0429 20:05:56.602456   66875 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:56.605006   66875 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-866143" ...
	I0429 20:05:53.062693   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.993382111s)
	I0429 20:05:53.062730   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0429 20:05:53.062757   66218 cache_images.go:123] Successfully loaded all cached images
	I0429 20:05:53.062762   66218 cache_images.go:92] duration metric: took 16.261337424s to LoadCachedImages
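The image work above follows one pattern per image: inspect what the runtime already has, remove it with crictl when the ID does not match the cached hash ("needs transfer"), load the cached tarball with podman, and skip the copy when the file on the VM is already current. A rough Go sketch of that check-remove-load step; the helper name and paths are illustrative, not minikube's actual API:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage mirrors the sequence in the log: compare the image ID reported
// by podman with the ID expected from the cache; if they differ, remove the
// stale image with crictl and load the cached tarball with podman.
func ensureImage(name, wantID, tarball string) error {
	out, _ := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", name).Output()
	if strings.TrimSpace(string(out)) == wantID {
		return nil // already present at the right hash
	}
	// "<name> needs transfer": drop whatever the runtime has, then reload.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", name).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("loading %s: %w", tarball, err)
	}
	return nil
}

func main() {
	err := ensureImage(
		"registry.k8s.io/etcd:3.5.12-0",
		"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
		"/var/lib/minikube/images/etcd_3.5.12-0",
	)
	fmt.Println(err)
}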
	I0429 20:05:53.062770   66218 kubeadm.go:928] updating node { 192.168.39.235 8443 v1.30.0 crio true true} ...
	I0429 20:05:53.062893   66218 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-456788 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:05:53.062994   66218 ssh_runner.go:195] Run: crio config
	I0429 20:05:53.116289   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:05:53.116311   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:05:53.116322   66218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:05:53.116340   66218 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.235 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-456788 NodeName:no-preload-456788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:05:53.116516   66218 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-456788"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
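The kubeadm YAML above is rendered from the option set logged a few lines earlier (advertise address, node name, pod and service CIDRs, Kubernetes version) and written out as /var/tmp/minikube/kubeadm.yaml.new. A much-reduced sketch of that kind of templating, using a made-up struct with only a handful of the real fields:

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts is a stripped-down stand-in for the options minikube feeds its
// kubeadm template; field names here are ours.
type kubeadmOpts struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.39.235",
		BindPort:          8443,
		NodeName:          "no-preload-456788",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.30.0",
	}
	// Render to stdout; the real flow writes the result to the VM as kubeadm.yaml.new.
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
}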
	I0429 20:05:53.116592   66218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:05:53.128095   66218 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:05:53.128174   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:05:53.138786   66218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0429 20:05:53.158151   66218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:05:53.176440   66218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 20:05:53.195348   66218 ssh_runner.go:195] Run: grep 192.168.39.235	control-plane.minikube.internal$ /etc/hosts
	I0429 20:05:53.199408   66218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
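The bash one-liner above makes the /etc/hosts update idempotent: drop any existing control-plane.minikube.internal line, then append the current IP. A rough Go equivalent of the same idea (it writes to a scratch path here rather than editing /etc/hosts via sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the name and appends a fresh
// "ip<TAB>name" entry, matching the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // drop blanks and the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	err := ensureHostsEntry("/tmp/hosts.example", "192.168.39.235", "control-plane.minikube.internal")
	fmt.Println(err)
}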
	I0429 20:05:53.212407   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:53.349752   66218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:05:53.368381   66218 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788 for IP: 192.168.39.235
	I0429 20:05:53.368401   66218 certs.go:194] generating shared ca certs ...
	I0429 20:05:53.368415   66218 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:05:53.368565   66218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:05:53.368609   66218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:05:53.368619   66218 certs.go:256] generating profile certs ...
	I0429 20:05:53.368697   66218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.key
	I0429 20:05:53.368751   66218 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.key.5f45c78c
	I0429 20:05:53.368785   66218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.key
	I0429 20:05:53.368889   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:05:53.368915   66218 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:05:53.368921   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:05:53.368944   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:05:53.368972   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:05:53.368993   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:05:53.369029   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:53.369624   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:05:53.428403   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:05:53.467050   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:05:53.501319   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:05:53.528828   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:05:53.553742   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:05:53.582308   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:05:53.609324   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:05:53.636730   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:05:53.663388   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:05:53.690949   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:05:53.717113   66218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:05:53.735784   66218 ssh_runner.go:195] Run: openssl version
	I0429 20:05:53.741879   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:05:53.752930   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.757811   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.757861   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.763798   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:05:53.775019   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:05:53.786654   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.791457   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.791500   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.797608   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:05:53.809139   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:05:53.820927   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.826384   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.826441   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.832798   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
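Each CA bundle copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 above). A small sketch of that step, shelling out to openssl for the hash the same way the log's commands do; the helper name and error handling are ours:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert asks openssl for the certificate's subject hash and creates the
// <certsDir>/<hash>.0 symlink if it is not already there, mirroring the
// "test -L ... || ln -fs ..." commands in the log.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already exists
	}
	return os.Symlink(certPath, link)
}

func main() {
	// Example paths from the log; running this for real needs root.
	err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(err)
}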
	I0429 20:05:53.844300   66218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:05:53.849139   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:05:53.855556   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:05:53.861716   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:05:53.868390   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:05:53.874740   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:05:53.881101   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
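The `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours; any that do would be regenerated. The same check expressed in Go, as an illustrative helper:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// inside the given window — the question `openssl x509 -checkend 86400`
// answers for each cert in the log.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err) // true would mean the cert needs regenerating
}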
	I0429 20:05:53.887688   66218 kubeadm.go:391] StartCluster: {Name:no-preload-456788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:05:53.887807   66218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:05:53.887858   66218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:05:53.930491   66218 cri.go:89] found id: ""
	I0429 20:05:53.930563   66218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:05:53.941016   66218 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:05:53.941037   66218 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:05:53.941042   66218 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:05:53.941081   66218 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:05:53.950651   66218 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:05:53.951536   66218 kubeconfig.go:125] found "no-preload-456788" server: "https://192.168.39.235:8443"
	I0429 20:05:53.953451   66218 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:05:53.962857   66218 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.235
	I0429 20:05:53.962879   66218 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:05:53.962889   66218 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:05:53.962932   66218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:05:54.000841   66218 cri.go:89] found id: ""
	I0429 20:05:54.000909   66218 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:05:54.018221   66218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:05:54.028524   66218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:05:54.028556   66218 kubeadm.go:156] found existing configuration files:
	
	I0429 20:05:54.028600   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:05:54.038717   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:05:54.038807   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:05:54.049350   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:05:54.059483   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:05:54.059548   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:05:54.069518   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:05:54.078900   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:05:54.078953   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:05:54.088652   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:05:54.098545   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:05:54.098596   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
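The grep/rm pairs above implement the stale-config cleanup: a kubeconfig that does not reference https://control-plane.minikube.internal:8443 is treated as stale and deleted before kubeadm regenerates it. A compact sketch of that rule (the helper name is ours):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfig removes a kubeconfig that does not point at the expected
// control-plane endpoint, mirroring the grep-then-rm sequence in the log.
func cleanStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // still points at the right endpoint, keep it
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		err := cleanStaleKubeconfig("/etc/kubernetes/"+f, endpoint)
		fmt.Println(f, err)
	}
}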
	I0429 20:05:54.108351   66218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:05:54.118645   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:54.236330   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:55.859211   66218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.622843221s)
	I0429 20:05:55.859254   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.075993   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.175176   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.274249   66218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:05:56.274469   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:56.775315   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:57.274840   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:57.315656   66218 api_server.go:72] duration metric: took 1.041421989s to wait for apiserver process to appear ...
	I0429 20:05:57.315697   66218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:05:57.315719   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:05:57.316669   66218 api_server.go:269] stopped: https://192.168.39.235:8443/healthz: Get "https://192.168.39.235:8443/healthz": dial tcp 192.168.39.235:8443: connect: connection refused
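After the kubeadm init phases, the restart path polls the apiserver's /healthz endpoint; the first attempts fail with "connection refused" exactly as above while the static pod is still coming up. An illustrative Go version of that wait loop (the timeouts and the skip-verify transport are choices made for this sketch):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers "ok"
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serving cert is not in the host trust store here,
		// so skip verification for this illustrative health probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.235:8443/healthz", 4*time.Minute))
}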
	I0429 20:05:55.230409   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230860   66615 main.go:141] libmachine: (old-k8s-version-919612) Found IP for machine: 192.168.72.240
	I0429 20:05:55.230889   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has current primary IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230898   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserving static IP address...
	I0429 20:05:55.231252   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.231287   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | skip adding static IP to network mk-old-k8s-version-919612 - found existing host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"}
	I0429 20:05:55.231305   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserved static IP address: 192.168.72.240
	I0429 20:05:55.231319   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting for SSH to be available...
	I0429 20:05:55.231335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Getting to WaitForSSH function...
	I0429 20:05:55.233198   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.233500   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH client type: external
	I0429 20:05:55.233671   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa (-rw-------)
	I0429 20:05:55.233706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:05:55.233730   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | About to run SSH command:
	I0429 20:05:55.233747   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | exit 0
	I0429 20:05:55.354242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | SSH cmd err, output: <nil>: 
	I0429 20:05:55.354584   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetConfigRaw
	I0429 20:05:55.355221   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.357791   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.358276   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358564   66615 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/config.json ...
	I0429 20:05:55.358786   66615 machine.go:94] provisionDockerMachine start ...
	I0429 20:05:55.358807   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:55.359037   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.361536   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.361861   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.361885   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.362048   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.362247   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362416   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362568   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.362733   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.362930   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.362943   66615 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:05:55.462364   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:05:55.462388   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462632   66615 buildroot.go:166] provisioning hostname "old-k8s-version-919612"
	I0429 20:05:55.462669   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462852   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.465335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465674   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.465706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465836   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.466034   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466208   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466366   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.466525   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.466729   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.466745   66615 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-919612 && echo "old-k8s-version-919612" | sudo tee /etc/hostname
	I0429 20:05:55.596239   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-919612
	
	I0429 20:05:55.596281   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.599221   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599575   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.599606   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599770   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.599970   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600122   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600316   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.600498   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.600667   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.600690   66615 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-919612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-919612/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-919612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:05:55.716588   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:55.716621   66615 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:05:55.716647   66615 buildroot.go:174] setting up certificates
	I0429 20:05:55.716658   66615 provision.go:84] configureAuth start
	I0429 20:05:55.716671   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.716956   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.719569   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.719919   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.719956   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.720095   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.722484   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.722876   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.722912   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.723036   66615 provision.go:143] copyHostCerts
	I0429 20:05:55.723087   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:05:55.723097   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:05:55.723158   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:05:55.723253   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:05:55.723262   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:05:55.723280   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:05:55.723336   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:05:55.723342   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:05:55.723358   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:05:55.723404   66615 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-919612 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-919612]
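configureAuth issues a per-machine server certificate whose SANs are the list logged above (loopback, machine IP, localhost, minikube, the machine name). A compact crypto/x509 sketch of producing such a certificate; it self-signs for brevity, whereas the real provisioner signs with ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Subject Alternative Names matching the san=[...] list in the log.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-919612"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-919612"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.240")},
	}
	// Self-signed here for brevity; the provisioner signs with the minikube CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}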
	I0429 20:05:55.878639   66615 provision.go:177] copyRemoteCerts
	I0429 20:05:55.878724   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:05:55.878750   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.881746   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882306   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.882358   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882540   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.882743   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.882986   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.883139   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:55.973158   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:05:56.003094   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0429 20:05:56.031670   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:05:56.059049   66615 provision.go:87] duration metric: took 342.376371ms to configureAuth
	I0429 20:05:56.059091   66615 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:05:56.059335   66615 config.go:182] Loaded profile config "old-k8s-version-919612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 20:05:56.059441   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.062416   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.062887   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.062921   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.063082   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.063322   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063521   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063688   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.063901   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.064066   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.064082   66615 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:05:56.342484   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:05:56.342511   66615 machine.go:97] duration metric: took 983.711183ms to provisionDockerMachine
	I0429 20:05:56.342525   66615 start.go:293] postStartSetup for "old-k8s-version-919612" (driver="kvm2")
	I0429 20:05:56.342540   66615 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:05:56.342589   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.342931   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:05:56.342983   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.345399   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345710   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.345731   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345869   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.346047   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.346233   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.346418   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.431189   66615 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:05:56.435878   66615 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:05:56.435903   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:05:56.435983   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:05:56.436086   66615 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:05:56.436170   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:05:56.445841   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:56.472683   66615 start.go:296] duration metric: took 130.146591ms for postStartSetup
	I0429 20:05:56.472715   66615 fix.go:56] duration metric: took 21.31705375s for fixHost
	I0429 20:05:56.472736   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.475127   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.475492   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475624   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.475857   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476055   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476211   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.476378   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.476536   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.476547   66615 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:05:56.578999   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421156.548872445
	
	I0429 20:05:56.579028   66615 fix.go:216] guest clock: 1714421156.548872445
	I0429 20:05:56.579040   66615 fix.go:229] Guest: 2024-04-29 20:05:56.548872445 +0000 UTC Remote: 2024-04-29 20:05:56.472718546 +0000 UTC m=+226.572342220 (delta=76.153899ms)
	I0429 20:05:56.579068   66615 fix.go:200] guest clock delta is within tolerance: 76.153899ms
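	(The clock check above boils down to: run `date +%s.%N` on the guest, parse the result, and compare it with the host-side timestamp. A minimal sketch of that comparison using the values from the log; the one-second tolerance is an assumption for the demo, not minikube's setting:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far it
	// is from the given local timestamp, mirroring the fix.go lines above.
	func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		d := guest.Sub(local)
		if d < 0 {
			d = -d
		}
		return d, nil
	}

	func main() {
		// Both values are taken from the log lines above (nanosecond precision is
		// lost in the float64 parse, which is irrelevant at millisecond scale).
		local := time.Date(2024, 4, 29, 20, 5, 56, 472718546, time.UTC)
		d, _ := clockDelta("1714421156.548872445", local)
		fmt.Printf("delta=%v withinTolerance=%v\n", d, d < time.Second)
	}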
	I0429 20:05:56.579076   66615 start.go:83] releasing machines lock for "old-k8s-version-919612", held for 21.423436193s
	I0429 20:05:56.579111   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.579407   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:56.582338   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582673   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.582711   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582856   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583365   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583543   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583625   66615 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:05:56.583667   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.583765   66615 ssh_runner.go:195] Run: cat /version.json
	I0429 20:05:56.583805   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.586263   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586552   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586618   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586656   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586891   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.586953   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586989   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.587060   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587170   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.587240   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587310   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587458   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587462   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.587600   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.672678   66615 ssh_runner.go:195] Run: systemctl --version
	I0429 20:05:56.694175   66615 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:05:56.859009   66615 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:05:56.865723   66615 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:05:56.865798   66615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:05:56.885686   66615 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:05:56.885714   66615 start.go:494] detecting cgroup driver to use...
	I0429 20:05:56.885805   66615 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:05:56.909082   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:05:56.931583   66615 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:05:56.931646   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:05:56.953524   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:05:56.976170   66615 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:05:57.122813   66615 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:05:57.315725   66615 docker.go:233] disabling docker service ...
	I0429 20:05:57.315786   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:05:57.333927   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:05:57.350022   66615 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:05:57.525787   66615 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:05:57.685802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:05:57.703246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:05:57.730558   66615 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0429 20:05:57.730618   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.747081   66615 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:05:57.747133   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.760168   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.773553   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.787609   66615 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:05:57.800532   66615 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:05:57.813582   66615 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:05:57.813669   66615 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:05:57.832224   66615 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
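	(The netfilter fallback above is: if the bridge-nf-call-iptables sysctl cannot be read, load br_netfilter, then make sure IPv4 forwarding is on. A small illustrative sketch of the same sequence; it needs root, and the wrapper itself is not minikube code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// If the bridge netfilter sysctl file is missing, the module is not loaded yet.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			// Load br_netfilter, mirroring the fallback logged above.
			if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe failed: %v: %s\n", err, out)
				return
			}
		}
		// Enable IPv4 forwarding, as the next logged command does.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			fmt.Printf("enabling ip_forward: %v\n", err)
		}
	}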
	I0429 20:05:57.844783   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:57.991666   66615 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:05:58.183635   66615 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:05:58.183718   66615 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:05:58.189441   66615 start.go:562] Will wait 60s for crictl version
	I0429 20:05:58.189509   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:05:58.194049   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:05:58.250751   66615 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:05:58.250839   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.292368   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.336121   66615 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0429 20:05:58.337389   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:58.340707   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341125   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:58.341153   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341387   66615 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0429 20:05:58.346434   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:58.361081   66615 kubeadm.go:877] updating cluster {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:05:58.361242   66615 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 20:05:58.361307   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:05:58.414304   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:05:58.414366   66615 ssh_runner.go:195] Run: which lz4
	I0429 20:05:58.420584   66615 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:05:58.425682   66615 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:05:58.425712   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0429 20:05:56.606748   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Start
	I0429 20:05:56.606929   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring networks are active...
	I0429 20:05:56.607627   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring network default is active
	I0429 20:05:56.608028   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring network mk-default-k8s-diff-port-866143 is active
	I0429 20:05:56.608557   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Getting domain xml...
	I0429 20:05:56.609325   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Creating domain...
	I0429 20:05:57.911657   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting to get IP...
	I0429 20:05:57.912705   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:57.913118   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:57.913211   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:57.913104   67743 retry.go:31] will retry after 298.590493ms: waiting for machine to come up
	I0429 20:05:58.213730   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.214424   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.214578   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:58.214487   67743 retry.go:31] will retry after 375.439886ms: waiting for machine to come up
	I0429 20:05:58.592145   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.592671   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.592700   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:58.592626   67743 retry.go:31] will retry after 432.890106ms: waiting for machine to come up
	I0429 20:05:59.027344   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.027782   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.027812   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:59.027732   67743 retry.go:31] will retry after 547.616894ms: waiting for machine to come up
	I0429 20:05:59.576555   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.577116   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.577140   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:59.577058   67743 retry.go:31] will retry after 662.088326ms: waiting for machine to come up
	I0429 20:06:00.240907   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.241712   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.241744   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:00.241667   67743 retry.go:31] will retry after 691.874394ms: waiting for machine to come up
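	(The retry.go lines above are a plain wait-with-growing-backoff loop around "does the domain have an IP lease yet". A self-contained sketch of that pattern; the lookup function and the 50% backoff growth are stand-ins, not libmachine's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP retries lookup with a growing delay, like the retry.go lines above.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		delay := 300 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay += delay / 2 // back off a little more each round
		}
		return "", errors.New("machine never reported an IP")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 3 {
				return "", errors.New("no lease yet")
			}
			return "192.168.50.10", nil // hypothetical address for the demo
		}, 10)
		fmt.Println(ip, err)
	}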
	I0429 20:05:57.816218   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.079778   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:01.079817   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:01.079832   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.112008   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:01.112043   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:01.316358   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.322401   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:01.322437   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:01.815974   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.825156   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:01.825219   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:02.316473   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:02.328725   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:02.328763   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:02.816674   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:02.822826   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:02.822866   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:03.315863   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:03.323314   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:03.323366   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:03.816529   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:03.822521   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:03.822556   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:04.316336   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:04.325750   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I0429 20:06:04.337308   66218 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:04.337348   66218 api_server.go:131] duration metric: took 7.02164287s to wait for apiserver health ...
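	(The healthz sequence above, 403 while RBAC bootstraps, then 500 while post-start hooks finish, then 200, is produced by a simple poll loop. A minimal sketch of such a loop; the insecure TLS config and the 500ms interval are assumptions for the sketch, not minikube's actual client setup:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil // healthz returned "ok"
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %v", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.235:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}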
	I0429 20:06:04.337361   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:06:04.337370   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:04.505344   66218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:00.520217   66615 crio.go:462] duration metric: took 2.099664395s to copy over tarball
	I0429 20:06:00.520314   66615 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:04.082476   66615 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562128598s)
	I0429 20:06:04.082527   66615 crio.go:469] duration metric: took 3.562271241s to extract the tarball
	I0429 20:06:04.082538   66615 ssh_runner.go:146] rm: /preloaded.tar.lz4
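	(The preload path here is: probe for /preloaded.tar.lz4 on the guest, scp it over when missing, unpack it into /var with security xattrs preserved, then delete the tarball. A hedged sketch of the extract step using the exact tar flags from the log; running it requires lz4 and root:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"

		// Existence check, mirroring the "stat -c ..." probe in the log.
		if _, err := os.Stat(tarball); err != nil {
			fmt.Printf("%s not present, it would be scp'd over first: %v\n", tarball, err)
			return
		}

		// Extract into /var, preserving security xattrs, exactly as logged.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v: %s\n", err, out)
		}
	}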
	I0429 20:06:04.129338   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:04.177683   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:06:04.177709   66615 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 20:06:04.177762   66615 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.177798   66615 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.177817   66615 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.177834   66615 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.177835   66615 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.177783   66615 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.177897   66615 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0429 20:06:04.177972   66615 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179282   66615 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.179360   66615 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.179361   66615 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.179320   66615 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.179331   66615 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.179299   66615 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0429 20:06:04.323997   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376145   66615 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0429 20:06:04.376210   66615 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376261   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.381592   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.420565   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0429 20:06:04.440670   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0429 20:06:04.461763   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.499283   66615 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0429 20:06:04.499347   66615 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0429 20:06:04.499404   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513860   66615 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0429 20:06:04.513900   66615 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.513946   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513988   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0429 20:06:04.548990   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.556713   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.556942   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.556965   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0429 20:06:04.566227   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.598982   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.656930   66615 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0429 20:06:04.656980   66615 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.657038   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.724922   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0429 20:06:04.725179   66615 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0429 20:06:04.725218   66615 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.725262   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732375   66615 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0429 20:06:04.732429   66615 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.732482   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732492   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.732483   66615 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0429 20:06:04.732669   66615 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.732726   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.735419   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.739785   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.742496   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.834684   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0429 20:06:04.834754   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0429 20:06:04.834811   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0429 20:06:04.847076   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
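	(The cache_images decisions above follow one rule per image: ask the runtime for the image ID, and if it is absent or differs from the expected digest, mark it "needs transfer", remove the stale tag, and load it from the local cache directory. A simplified sketch of that check; only the podman inspect command and the digest are taken from the log, the rest is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// needsTransfer reports whether the image is absent or present with the wrong ID,
	// mirroring the "needs transfer: ... does not exist at hash ..." lines above.
	func needsTransfer(image, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // not present at all
		}
		return strings.TrimSpace(string(out)) != wantID
	}

	func main() {
		image := "registry.k8s.io/kube-apiserver:v1.20.0"
		want := "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
		if needsTransfer(image, want) {
			fmt.Printf("%s needs transfer; would load it from the local image cache\n", image)
		}
	}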
	I0429 20:06:00.935382   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.935935   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.935979   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:00.935902   67743 retry.go:31] will retry after 1.024898519s: waiting for machine to come up
	I0429 20:06:01.962446   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:01.963109   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:01.963140   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:01.963059   67743 retry.go:31] will retry after 1.19225855s: waiting for machine to come up
	I0429 20:06:03.157257   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:03.157781   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:03.157843   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:03.157738   67743 retry.go:31] will retry after 1.699779549s: waiting for machine to come up
	I0429 20:06:04.859190   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:04.859622   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:04.859670   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:04.859565   67743 retry.go:31] will retry after 2.307475318s: waiting for machine to come up
	I0429 20:06:04.671477   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:04.684650   66218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:04.718146   66218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:04.908181   66218 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:04.908213   66218 system_pods.go:61] "coredns-7db6d8ff4d-d4kwk" [215ff4b8-3ae5-49a7-8a9f-6acb4d176b93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:04.908223   66218 system_pods.go:61] "etcd-no-preload-456788" [3ec7e177-1b68-4bff-aa4d-803f5346e1be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:04.908231   66218 system_pods.go:61] "kube-apiserver-no-preload-456788" [5e8bf0b0-9669-4f0c-8da1-523589158b16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:04.908236   66218 system_pods.go:61] "kube-controller-manager-no-preload-456788" [515363f7-bde1-4ba7-a5a9-6779f673afaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:04.908240   66218 system_pods.go:61] "kube-proxy-slnph" [29f503bf-ce19-425c-8174-2b8e7b27a424] Running
	I0429 20:06:04.908253   66218 system_pods.go:61] "kube-scheduler-no-preload-456788" [4f394af0-6452-49dd-9770-7c6bfcff3936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:04.908258   66218 system_pods.go:61] "metrics-server-569cc877fc-6mpnm" [5f183615-a243-410a-a524-ebdaa65e6400] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:04.908262   66218 system_pods.go:61] "storage-provisioner" [f74a777d-a3d7-4682-bad0-44bb993a2d43] Running
	I0429 20:06:04.908270   66218 system_pods.go:74] duration metric: took 190.098153ms to wait for pod list to return data ...
	I0429 20:06:04.908278   66218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:05.212876   66218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:05.212913   66218 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:05.212929   66218 node_conditions.go:105] duration metric: took 304.645545ms to run NodePressure ...
	I0429 20:06:05.212950   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:05.913252   66218 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:05.928914   66218 kubeadm.go:733] kubelet initialised
	I0429 20:06:05.928947   66218 kubeadm.go:734] duration metric: took 15.668535ms waiting for restarted kubelet to initialise ...
	I0429 20:06:05.928957   66218 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:05.937357   66218 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:05.091766   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:05.269730   66615 cache_images.go:92] duration metric: took 1.092006107s to LoadCachedImages
	W0429 20:06:05.269839   66615 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0429 20:06:05.269857   66615 kubeadm.go:928] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I0429 20:06:05.269988   66615 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-919612 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:05.270088   66615 ssh_runner.go:195] Run: crio config
	I0429 20:06:05.322439   66615 cni.go:84] Creating CNI manager for ""
	I0429 20:06:05.322471   66615 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:05.322486   66615 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:05.322522   66615 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-919612 NodeName:old-k8s-version-919612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0429 20:06:05.322746   66615 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-919612"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:05.322810   66615 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0429 20:06:05.340981   66615 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:05.341058   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:05.357048   66615 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0429 20:06:05.384352   66615 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:05.407887   66615 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
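The kubeadm.yaml.new transferred above is the rendered form of the "kubeadm config:" block printed earlier. A short sketch of how such a config could be rendered from a template; the struct and template fields here are illustrative, not minikube's actual generator:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds the handful of values substituted into the config;
// both this struct and the template below are illustrative only.
type kubeadmParams struct {
	BindPort          int
	ClusterName       string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		BindPort:          8443,
		ClusterName:       "mk",
		KubernetesVersion: "v1.20.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// The rendered text corresponds to what gets copied to /var/tmp/minikube/kubeadm.yaml.new over SSH.
	tmpl := template.Must(template.New("kubeadm").Parse(clusterConfig))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}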
	I0429 20:06:05.431531   66615 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:05.437567   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:05.457652   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:05.610358   66615 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:05.641538   66615 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612 for IP: 192.168.72.240
	I0429 20:06:05.641568   66615 certs.go:194] generating shared ca certs ...
	I0429 20:06:05.641583   66615 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:05.641758   66615 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:05.641831   66615 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:05.641843   66615 certs.go:256] generating profile certs ...
	I0429 20:06:05.641948   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.key
	I0429 20:06:05.642020   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key.5df5e618
	I0429 20:06:05.642083   66615 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key
	I0429 20:06:05.642256   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:05.642304   66615 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:05.642325   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:05.642364   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:05.642401   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:05.642435   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:05.642489   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:05.643156   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:05.691350   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:05.734434   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:05.773056   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:05.819778   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 20:06:05.868256   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:05.911589   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:05.957714   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 20:06:06.002120   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:06.039736   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:06.079636   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:06.118317   66615 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:06.145932   66615 ssh_runner.go:195] Run: openssl version
	I0429 20:06:06.152970   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:06.166609   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.171939   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.172033   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.179153   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:06.193491   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:06.207800   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214803   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214876   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.222154   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:06.236908   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:06.254197   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260797   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260863   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.267635   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
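The test -L / ln -fs pairs above install each CA under /etc/ssl/certs using OpenSSL's subject hash as the link name (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted CAs. A sketch of the same step that shells out to openssl for the hash; paths mirror the log and error handling is kept minimal:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the OpenSSL subject hash for certPath and symlinks
// <hash>.0 in certsDir to it, like the "ln -fs" commands in the log.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, matching "ln -fs" semantics
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}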
	I0429 20:06:06.282727   66615 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:06.289580   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:06.301014   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:06.310503   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:06.318708   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:06.325718   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:06.332690   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
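Each "openssl x509 ... -checkend 86400" run above asks whether the certificate expires within the next 24 hours (86,400 seconds). A Go equivalent using crypto/x509, shown as a sketch with one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within duration d, mirroring "openssl x509 -checkend".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}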
	I0429 20:06:06.339914   66615 kubeadm.go:391] StartCluster: {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:06.340012   66615 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:06.340069   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.391511   66615 cri.go:89] found id: ""
	I0429 20:06:06.391618   66615 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:06.408955   66615 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:06.408985   66615 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:06.408991   66615 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:06.409060   66615 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:06.425276   66615 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:06.426397   66615 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-919612" does not appear in /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:06:06.427298   66615 kubeconfig.go:62] /home/jenkins/minikube-integration/18774-7754/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-919612" cluster setting kubeconfig missing "old-k8s-version-919612" context setting]
	I0429 20:06:06.428287   66615 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:06.429908   66615 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:06.443630   66615 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.240
	I0429 20:06:06.443674   66615 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:06.443686   66615 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:06.443753   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.486251   66615 cri.go:89] found id: ""
	I0429 20:06:06.486339   66615 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:06.507136   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:06.523798   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:06.523828   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:06.523887   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:06:06.536668   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:06.536735   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:06.547800   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:06:06.560435   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:06.560517   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:06.572227   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.582772   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:06.582825   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.594168   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:06:06.605940   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:06.606013   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:06.621829   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:06.637520   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:06.779910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:07.921143   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.141191032s)
	I0429 20:06:07.921178   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.172381   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.276243   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.398312   66615 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:08.398424   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:08.899388   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.399344   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.898731   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:07.168679   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:07.169214   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:07.169264   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:07.169146   67743 retry.go:31] will retry after 2.050354993s: waiting for machine to come up
	I0429 20:06:09.221915   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:09.222545   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:09.222581   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:09.222449   67743 retry.go:31] will retry after 2.544889222s: waiting for machine to come up
	I0429 20:06:07.947247   66218 pod_ready.go:102] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:10.449364   66218 pod_ready.go:102] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:10.943731   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:10.943754   66218 pod_ready.go:81] duration metric: took 5.006367348s for pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:10.943763   66218 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.453825   66218 pod_ready.go:92] pod "etcd-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.453853   66218 pod_ready.go:81] duration metric: took 1.510082371s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.453865   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.462971   66218 pod_ready.go:92] pod "kube-apiserver-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.462997   66218 pod_ready.go:81] duration metric: took 9.123374ms for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.463011   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.471032   66218 pod_ready.go:92] pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.471066   66218 pod_ready.go:81] duration metric: took 8.024113ms for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.471077   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-slnph" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.478671   66218 pod_ready.go:92] pod "kube-proxy-slnph" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.478695   66218 pod_ready.go:81] duration metric: took 7.609313ms for pod "kube-proxy-slnph" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.478706   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.542851   66218 pod_ready.go:92] pod "kube-scheduler-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.542875   66218 pod_ready.go:81] duration metric: took 64.16109ms for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.542888   66218 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" ...
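The pod_ready.go lines above poll each system pod until its Ready condition is True or the 4m0s budget runs out. A client-go sketch of that check, using an illustrative kubeconfig path and a pod name taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := podReady(cs, "kube-system", "coredns-7db6d8ff4d-d4kwk"); err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}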
	I0429 20:06:10.399055   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:10.898742   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.399250   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.898511   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.399301   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.899399   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.399242   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.899417   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.398526   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.898976   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
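The half-second spacing between the pgrep runs above comes from a poll loop that waits for the kube-apiserver process to appear after the kubeadm init phases. A sketch of that loop; here pgrep runs locally, whereas the real flow executes it over SSH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerProcess runs pgrep repeatedly until a matching PID
// shows up or the deadline passes.
func waitForAPIServerProcess(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing between the Run lines above
	}
	return "", errors.New("timed out waiting for apiserver process")
}

func main() {
	pid, err := waitForAPIServerProcess(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}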
	I0429 20:06:11.768576   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:11.768967   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:11.769003   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:11.768924   67743 retry.go:31] will retry after 3.829285986s: waiting for machine to come up
	I0429 20:06:17.032004   65980 start.go:364] duration metric: took 56.727982697s to acquireMachinesLock for "embed-certs-161370"
	I0429 20:06:17.032074   65980 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:06:17.032085   65980 fix.go:54] fixHost starting: 
	I0429 20:06:17.032452   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:17.032485   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:17.050767   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44211
	I0429 20:06:17.051181   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:17.051655   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:06:17.051680   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:17.052002   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:17.052188   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:17.052363   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:06:17.053975   65980 fix.go:112] recreateIfNeeded on embed-certs-161370: state=Stopped err=<nil>
	I0429 20:06:17.054002   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	W0429 20:06:17.054167   65980 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:06:17.056054   65980 out.go:177] * Restarting existing kvm2 VM for "embed-certs-161370" ...
	I0429 20:06:14.550615   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:17.050288   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:17.057452   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Start
	I0429 20:06:17.057630   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring networks are active...
	I0429 20:06:17.058381   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring network default is active
	I0429 20:06:17.058680   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring network mk-embed-certs-161370 is active
	I0429 20:06:17.059024   65980 main.go:141] libmachine: (embed-certs-161370) Getting domain xml...
	I0429 20:06:17.059697   65980 main.go:141] libmachine: (embed-certs-161370) Creating domain...
	I0429 20:06:15.599423   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.599897   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has current primary IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.599915   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Found IP for machine: 192.168.61.106
	I0429 20:06:15.599929   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Reserving static IP address...
	I0429 20:06:15.600318   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Reserved static IP address: 192.168.61.106
	I0429 20:06:15.600360   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-866143", mac: "52:54:00:af:de:09", ip: "192.168.61.106"} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.600375   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for SSH to be available...
	I0429 20:06:15.600405   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | skip adding static IP to network mk-default-k8s-diff-port-866143 - found existing host DHCP lease matching {name: "default-k8s-diff-port-866143", mac: "52:54:00:af:de:09", ip: "192.168.61.106"}
	I0429 20:06:15.600423   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Getting to WaitForSSH function...
	I0429 20:06:15.602983   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.603379   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.603414   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.603581   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Using SSH client type: external
	I0429 20:06:15.603611   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa (-rw-------)
	I0429 20:06:15.603675   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:06:15.603701   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | About to run SSH command:
	I0429 20:06:15.603733   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | exit 0
	I0429 20:06:15.734933   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | SSH cmd err, output: <nil>: 
	I0429 20:06:15.735306   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetConfigRaw
	I0429 20:06:15.735918   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:15.738878   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.739349   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.739385   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.739745   66875 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/config.json ...
	I0429 20:06:15.739943   66875 machine.go:94] provisionDockerMachine start ...
	I0429 20:06:15.739966   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:15.740215   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.742731   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.743068   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.743097   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.743253   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.743448   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.743592   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.743729   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.743859   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.744066   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.744080   66875 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:06:15.855258   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:06:15.855292   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:15.855585   66875 buildroot.go:166] provisioning hostname "default-k8s-diff-port-866143"
	I0429 20:06:15.855604   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:15.855792   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.858278   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.858644   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.858672   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.858802   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.858996   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.859179   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.859327   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.859498   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.859667   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.859682   66875 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-866143 && echo "default-k8s-diff-port-866143" | sudo tee /etc/hostname
	I0429 20:06:15.986031   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-866143
	
	I0429 20:06:15.986094   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.989211   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.989633   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.989666   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.989858   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.990078   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.990281   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.990441   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.990591   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.990746   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.990763   66875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-866143' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-866143/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-866143' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:06:16.119358   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
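The hostname and /etc/hosts provisioning shown above is executed as remote commands through an SSH client authenticated with the machine's private key. A minimal golang.org/x/crypto/ssh sketch of running such a command, reusing the host, user, and key path from the log; everything else is illustrative:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes cmd on addr over SSH using key-based auth and
// returns the combined output.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the logged ssh invocation disables strict host key checking too
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.61.106:22", "docker",
		"/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa",
		"hostname")
	fmt.Print(out)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}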
	I0429 20:06:16.119389   66875 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:06:16.119420   66875 buildroot.go:174] setting up certificates
	I0429 20:06:16.119431   66875 provision.go:84] configureAuth start
	I0429 20:06:16.119442   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:16.119741   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:16.122611   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.122991   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.123016   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.123180   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.125378   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.125673   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.125713   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.125805   66875 provision.go:143] copyHostCerts
	I0429 20:06:16.125883   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:06:16.125896   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:06:16.125963   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:06:16.126112   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:06:16.126125   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:06:16.126152   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:06:16.126234   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:06:16.126245   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:06:16.126270   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:06:16.126348   66875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-866143 san=[127.0.0.1 192.168.61.106 default-k8s-diff-port-866143 localhost minikube]
	I0429 20:06:16.280583   66875 provision.go:177] copyRemoteCerts
	I0429 20:06:16.280641   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:06:16.280665   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.283452   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.283760   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.283800   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.283999   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.284175   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.284335   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.284428   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:16.374564   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:06:16.408695   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0429 20:06:16.441975   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:06:16.470921   66875 provision.go:87] duration metric: took 351.479703ms to configureAuth
	I0429 20:06:16.470946   66875 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:06:16.471124   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:16.471205   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.473799   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.474105   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.474139   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.474291   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.474502   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.474692   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.474830   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.474995   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:16.475152   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:16.475167   66875 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:06:16.774044   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:06:16.774093   66875 machine.go:97] duration metric: took 1.034135495s to provisionDockerMachine
	I0429 20:06:16.774108   66875 start.go:293] postStartSetup for "default-k8s-diff-port-866143" (driver="kvm2")
	I0429 20:06:16.774123   66875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:06:16.774148   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:16.774509   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:06:16.774539   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.777163   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.777603   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.777639   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.777779   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.777949   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.778109   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.778259   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:16.866104   66875 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:06:16.870760   66875 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:06:16.870780   66875 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:06:16.870839   66875 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:06:16.870916   66875 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:06:16.871003   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:06:16.881137   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:16.911284   66875 start.go:296] duration metric: took 137.163661ms for postStartSetup
	I0429 20:06:16.911318   66875 fix.go:56] duration metric: took 20.332102679s for fixHost
	I0429 20:06:16.911337   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.914440   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.914810   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.914838   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.915087   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.915287   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.915511   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.915692   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.915886   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:16.916034   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:16.916045   66875 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 20:06:17.031867   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421177.003309274
	
	I0429 20:06:17.031892   66875 fix.go:216] guest clock: 1714421177.003309274
	I0429 20:06:17.031900   66875 fix.go:229] Guest: 2024-04-29 20:06:17.003309274 +0000 UTC Remote: 2024-04-29 20:06:16.911322778 +0000 UTC m=+211.453402116 (delta=91.986496ms)
	I0429 20:06:17.031921   66875 fix.go:200] guest clock delta is within tolerance: 91.986496ms
	I0429 20:06:17.031928   66875 start.go:83] releasing machines lock for "default-k8s-diff-port-866143", held for 20.452741912s
	I0429 20:06:17.031957   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.032261   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:17.035096   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.035467   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.035497   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.035620   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036246   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036425   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036515   66875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:06:17.036569   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:17.036698   66875 ssh_runner.go:195] Run: cat /version.json
	I0429 20:06:17.036726   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:17.039300   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039595   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039813   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.039848   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039907   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:17.039984   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.040017   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.040069   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:17.040172   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:17.040230   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:17.040329   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:17.040382   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:17.040483   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:17.040636   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:17.137510   66875 ssh_runner.go:195] Run: systemctl --version
	I0429 20:06:17.160834   66875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:06:17.320792   66875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:06:17.328367   66875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:06:17.328448   66875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:06:17.349698   66875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:06:17.349724   66875 start.go:494] detecting cgroup driver to use...
	I0429 20:06:17.349807   66875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:06:17.372156   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:06:17.388142   66875 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:06:17.388206   66875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:06:17.406108   66875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:06:17.422323   66875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:06:17.555079   66875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:06:17.727126   66875 docker.go:233] disabling docker service ...
	I0429 20:06:17.727194   66875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:06:17.743136   66875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:06:17.757045   66875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:06:17.885705   66875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:06:18.021993   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:06:18.039020   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:06:18.063267   66875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:06:18.063330   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.076473   66875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:06:18.076545   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.089566   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.102912   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.116940   66875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:06:18.130940   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.150505   66875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.177724   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.191088   66875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:06:18.203560   66875 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:06:18.203635   66875 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:06:18.221087   66875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
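The netfilter probe above is a soft check: `sysctl net.bridge.bridge-nf-call-iptables` exits with status 255 while the br_netfilter module is not loaded, so the code treats the failure as "might be okay", loads the module, and then enables IPv4 forwarding. A rough Go sketch of that fallback, shelling out to the same commands quoted in the log (error handling simplified; not minikube's actual code):

package main

import (
	"log"
	"os/exec"
)

// ensureNetfilter mirrors the probe-then-modprobe sequence from the log:
// if the bridge sysctl is missing, load br_netfilter, then turn on
// IPv4 forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl is absent until the module is loaded; this is expected.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureNetfilter(); err != nil {
		log.Fatal(err)
	}
}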
	I0429 20:06:18.233719   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:18.383406   66875 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:06:18.543941   66875 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:06:18.544029   66875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:06:18.550828   66875 start.go:562] Will wait 60s for crictl version
	I0429 20:06:18.550891   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:06:18.556158   66875 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:06:18.607004   66875 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:06:18.607083   66875 ssh_runner.go:195] Run: crio --version
	I0429 20:06:18.638282   66875 ssh_runner.go:195] Run: crio --version
	I0429 20:06:18.674135   66875 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:06:15.399474   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:15.899352   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.899106   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.899205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.399351   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.899319   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.399303   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.898824   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.675590   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:18.678673   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:18.679055   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:18.679096   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:18.679272   66875 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0429 20:06:18.685110   66875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:18.705804   66875 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:06:18.705967   66875 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:06:18.706036   66875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:18.750754   66875 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:06:18.750823   66875 ssh_runner.go:195] Run: which lz4
	I0429 20:06:18.755893   66875 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 20:06:18.760892   66875 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:06:18.760921   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 20:06:19.055680   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:21.552080   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:18.301855   65980 main.go:141] libmachine: (embed-certs-161370) Waiting to get IP...
	I0429 20:06:18.302804   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.303231   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.303273   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.303198   67921 retry.go:31] will retry after 279.123731ms: waiting for machine to come up
	I0429 20:06:18.584013   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.584661   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.584703   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.584630   67921 retry.go:31] will retry after 239.910483ms: waiting for machine to come up
	I0429 20:06:18.825978   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.826393   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.826425   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.826349   67921 retry.go:31] will retry after 312.324444ms: waiting for machine to come up
	I0429 20:06:19.139999   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:19.140583   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:19.140611   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:19.140535   67921 retry.go:31] will retry after 498.525047ms: waiting for machine to come up
	I0429 20:06:19.640244   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:19.640797   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:19.640828   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:19.640756   67921 retry.go:31] will retry after 479.301061ms: waiting for machine to come up
	I0429 20:06:20.121396   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:20.121982   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:20.122015   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:20.121941   67921 retry.go:31] will retry after 706.389673ms: waiting for machine to come up
	I0429 20:06:20.829691   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:20.830191   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:20.830247   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:20.830166   67921 retry.go:31] will retry after 1.145397308s: waiting for machine to come up
	I0429 20:06:21.977290   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:21.977747   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:21.977779   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:21.977691   67921 retry.go:31] will retry after 955.977029ms: waiting for machine to come up
	I0429 20:06:20.399233   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.898571   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.398855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.898885   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.399328   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.398965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.899248   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.398833   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.899039   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.561047   66875 crio.go:462] duration metric: took 1.805186908s to copy over tarball
	I0429 20:06:20.561137   66875 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:23.264543   66875 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.703371921s)
	I0429 20:06:23.264573   66875 crio.go:469] duration metric: took 2.7034954s to extract the tarball
	I0429 20:06:23.264581   66875 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:23.303558   66875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:23.356825   66875 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 20:06:23.356854   66875 cache_images.go:84] Images are preloaded, skipping loading
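The sequence above is the standard preload path: crictl reports no images for the target version, the ~395 MB preload tarball is copied in, extracted with lz4 into /var, removed, and crictl is re-run to confirm the images are present. A hedged sketch of that check-and-extract step as local shell-outs (flags and paths copied from the log; minikube actually runs these through its ssh_runner):

package main

import (
	"bytes"
	"log"
	"os/exec"
)

// extractPreloadIfNeeded re-creates the flow from the log: if crictl has no
// images for the target version, unpack the preloaded tarball into /var and
// remove it afterwards.
func extractPreloadIfNeeded(tarball string) error {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return err
	}
	if bytes.Contains(out, []byte("registry.k8s.io/kube-apiserver:v1.30.0")) {
		return nil // images already present, nothing to do
	}
	if err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).Run(); err != nil {
		return err
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreloadIfNeeded("/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}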
	I0429 20:06:23.356873   66875 kubeadm.go:928] updating node { 192.168.61.106 8444 v1.30.0 crio true true} ...
	I0429 20:06:23.357007   66875 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-866143 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:23.357105   66875 ssh_runner.go:195] Run: crio config
	I0429 20:06:23.414195   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:06:23.414225   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:23.414237   66875 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:23.414267   66875 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.106 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-866143 NodeName:default-k8s-diff-port-866143 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:06:23.414459   66875 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.106
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-866143"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:23.414524   66875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:06:23.425977   66875 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:23.426089   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:23.437270   66875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0429 20:06:23.457613   66875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:23.479383   66875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0429 20:06:23.509517   66875 ssh_runner.go:195] Run: grep 192.168.61.106	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:23.514202   66875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:23.528721   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:23.666941   66875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:23.687710   66875 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143 for IP: 192.168.61.106
	I0429 20:06:23.687745   66875 certs.go:194] generating shared ca certs ...
	I0429 20:06:23.687768   66875 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:23.687952   66875 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:23.688005   66875 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:23.688020   66875 certs.go:256] generating profile certs ...
	I0429 20:06:23.688168   66875 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.key
	I0429 20:06:23.688260   66875 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.key.5d7fbd4b
	I0429 20:06:23.688318   66875 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.key
	I0429 20:06:23.688481   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:23.688532   66875 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:23.688548   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:23.688592   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:23.688628   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:23.688663   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:23.688722   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:23.689611   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:23.743834   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:23.783115   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:23.819086   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:23.850794   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0429 20:06:23.882477   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:23.918607   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:23.947837   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:06:23.977241   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:24.005902   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:24.034910   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:24.064119   66875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:24.083879   66875 ssh_runner.go:195] Run: openssl version
	I0429 20:06:24.090651   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:24.104929   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.110955   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.111034   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.117914   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:24.131076   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:24.144790   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.150842   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.150926   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.157842   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:24.171737   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:24.186164   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.191924   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.191995   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.199385   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
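The openssl/ln pairs above implement OpenSSL's hashed-CA-directory layout: each certificate copied into /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject hash plus a ".0" suffix so hash-based lookup can find it. A small sketch of one such pairing, shelling out to the real openssl binary (illustrative only):

package main

import (
	"log"
	"os/exec"
	"strings"
)

// linkCAByHash symlinks a CA certificate into /etc/ssl/certs under its
// OpenSSL subject hash, matching the "openssl x509 -hash -noout" plus
// "ln -fs" pairs in the log above.
func linkCAByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}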
	I0429 20:06:24.213392   66875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:24.219369   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:24.226784   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:24.234655   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:24.242406   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:24.249904   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:24.257400   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
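Each `openssl x509 -checkend 86400` run above simply asks whether the certificate expires within the next 24 hours. The equivalent check in Go, reading one of the PEM files listed above and comparing NotAfter (a minimal sketch, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within the given window (24h matches "-checkend 86400").
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}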
	I0429 20:06:24.264165   66875 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:24.264290   66875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:24.264353   66875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:24.310126   66875 cri.go:89] found id: ""
	I0429 20:06:24.310197   66875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:24.322134   66875 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:24.322155   66875 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:24.322160   66875 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:24.322223   66875 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:24.337713   66875 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:24.339184   66875 kubeconfig.go:125] found "default-k8s-diff-port-866143" server: "https://192.168.61.106:8444"
	I0429 20:06:24.342237   66875 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:24.353500   66875 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.106
	I0429 20:06:24.353545   66875 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:24.353560   66875 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:24.353627   66875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:24.399835   66875 cri.go:89] found id: ""
	I0429 20:06:24.399918   66875 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:24.426456   66875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:24.440261   66875 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:24.440282   66875 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:24.440376   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0429 20:06:24.450699   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:24.450766   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:24.462870   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0429 20:06:24.474894   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:24.474961   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:24.488607   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0429 20:06:24.499626   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:24.499685   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:24.514156   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0429 20:06:24.525958   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:24.526018   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:24.537063   66875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:24.548503   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:24.687916   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:24.051367   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:26.550970   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:22.935362   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:22.935797   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:22.935827   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:22.935746   67921 retry.go:31] will retry after 1.25494649s: waiting for machine to come up
	I0429 20:06:24.192017   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:24.192613   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:24.192641   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:24.192556   67921 retry.go:31] will retry after 1.641885834s: waiting for machine to come up
	I0429 20:06:25.836686   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:25.837170   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:25.837193   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:25.837125   67921 retry.go:31] will retry after 2.794216099s: waiting for machine to come up
	I0429 20:06:25.398515   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:25.898944   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.399360   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.899294   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.399520   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.899434   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.398734   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.898479   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.399413   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.899236   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.234143   66875 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.546180467s)
	I0429 20:06:26.234181   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.502030   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.577778   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.689836   66875 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:26.689982   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.190231   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.690207   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.729434   66875 api_server.go:72] duration metric: took 1.039599386s to wait for apiserver process to appear ...
	I0429 20:06:27.729473   66875 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:06:27.729497   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:27.730016   66875 api_server.go:269] stopped: https://192.168.61.106:8444/healthz: Get "https://192.168.61.106:8444/healthz": dial tcp 192.168.61.106:8444: connect: connection refused
	I0429 20:06:28.230353   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
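The healthz probes that follow poll https://192.168.61.106:8444/healthz roughly every half second, treating connection refused, 403 (anonymous access before RBAC bootstrap completes), and 500 (post-start hooks still failing) as "not ready yet". A condensed sketch of such a poll loop; TLS verification is skipped only because the probe runs before kubeconfig credentials are wired up, and the timeout values are assumptions:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. Non-200 responses and transport errors during
// startup are retried, mirroring the log above.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.61.106:8444/healthz", 4*time.Minute))
}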
	I0429 20:06:28.551049   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:31.051387   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:31.411151   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:31.411188   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:31.411205   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:31.424074   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:31.424106   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:31.729916   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:31.737269   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:31.737299   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:32.229834   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:32.237900   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:32.237935   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:32.730529   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:32.735043   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 200:
	ok
	I0429 20:06:32.743999   66875 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:32.744026   66875 api_server.go:131] duration metric: took 5.014546615s to wait for apiserver health ...
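The 403 and 500 responses above are the expected progression while the apiserver's post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish: the checker simply re-polls /healthz until it returns 200. A minimal stdlib-Go sketch of that polling pattern, with the endpoint and roughly the cadence taken from the log above (an illustration only, not minikube's api_server.go):

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// 403 (anonymous request) and 500 (post-start hooks still failing) are both
// treated as "not ready yet", matching the responses logged above.
func waitForHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        // The apiserver serves /healthz under its own CA; verification is
        // skipped here only to keep the sketch self-contained.
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        Timeout:   2 * time.Second,
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil // healthz returned "ok"
            }
            fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
    if err := waitForHealthz("https://192.168.61.106:8444/healthz", time.Minute); err != nil {
        fmt.Println(err)
    }
}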
	I0429 20:06:32.744035   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:06:32.744041   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:32.745889   66875 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:28.633451   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:28.633950   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:28.633979   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:28.633906   67921 retry.go:31] will retry after 2.251092878s: waiting for machine to come up
	I0429 20:06:30.887722   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:30.888251   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:30.888283   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:30.888208   67921 retry.go:31] will retry after 2.941721217s: waiting for machine to come up
	I0429 20:06:32.747198   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:32.760578   66875 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:32.786719   66875 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:32.797795   66875 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:32.797830   66875 system_pods.go:61] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:32.797838   66875 system_pods.go:61] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:32.797844   66875 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:32.797854   66875 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:32.797859   66875 system_pods.go:61] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0429 20:06:32.797866   66875 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:32.797873   66875 system_pods.go:61] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:32.797880   66875 system_pods.go:61] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0429 20:06:32.797888   66875 system_pods.go:74] duration metric: took 11.14839ms to wait for pod list to return data ...
	I0429 20:06:32.797902   66875 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:32.801888   66875 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:32.801909   66875 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:32.801918   66875 node_conditions.go:105] duration metric: took 4.010782ms to run NodePressure ...
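The node_conditions check above reads each node's CPU and ephemeral-storage capacity and verifies that no pressure condition is set. A small client-go sketch of the same read, using the kubeconfig path that appears later in this log (an illustration only, not minikube's node_conditions.go):

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Kubeconfig path taken from the "Updating kubeconfig" line in this log.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18774-7754/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    for _, n := range nodes.Items {
        fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
            n.Status.Capacity.Cpu().String(),
            n.Status.Capacity.StorageEphemeral().String())
        // On a healthy node only the Ready condition should be True; any
        // True pressure condition (MemoryPressure, DiskPressure, PIDPressure)
        // would fail the NodePressure verification.
        for _, c := range n.Status.Conditions {
            if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
            }
        }
    }
}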
	I0429 20:06:32.801934   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:33.088679   66875 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:33.094165   66875 kubeadm.go:733] kubelet initialised
	I0429 20:06:33.094185   66875 kubeadm.go:734] duration metric: took 5.479589ms waiting for restarted kubelet to initialise ...
	I0429 20:06:33.094192   66875 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:33.101524   66875 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.106879   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.106911   66875 pod_ready.go:81] duration metric: took 5.352162ms for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.106923   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.106946   66875 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.111446   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.111469   66875 pod_ready.go:81] duration metric: took 4.507858ms for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.111478   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.111483   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.115613   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.115643   66875 pod_ready.go:81] duration metric: took 4.152743ms for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.115654   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.115663   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.191660   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.191695   66875 pod_ready.go:81] duration metric: took 76.012388ms for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.191707   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.191713   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.592489   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-proxy-zddtx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.592522   66875 pod_ready.go:81] duration metric: took 400.801861ms for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.592535   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-proxy-zddtx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.592544   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.990624   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.990655   66875 pod_ready.go:81] duration metric: took 398.101779ms for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.990667   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.990673   66875 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:34.391120   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:34.391148   66875 pod_ready.go:81] duration metric: took 400.467456ms for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:34.391165   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:34.391173   66875 pod_ready.go:38] duration metric: took 1.296972775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
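Each pod_ready.go line above performs the same check: fetch the pod, look at its Ready condition, and retry until it is True or the 4m0s budget is exhausted (here every check is skipped early because the node itself is not yet "Ready"). A client-go sketch of that loop, with the pod name, namespace, and timeout taken from the log (an assumption-laden illustration, not the actual pod_ready.go implementation):

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18774-7754/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    // Poll every 2s for up to 4m, mirroring the "waiting up to 4m0s" lines above.
    err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
        func(ctx context.Context) (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-7m65s", metav1.GetOptions{})
            if err != nil {
                return false, nil // transient errors: keep retrying until the deadline
            }
            return isPodReady(pod), nil
        })
    fmt.Println("ready:", err == nil)
}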
	I0429 20:06:34.391191   66875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:06:34.408817   66875 ops.go:34] apiserver oom_adj: -16
	I0429 20:06:34.408845   66875 kubeadm.go:591] duration metric: took 10.086677852s to restartPrimaryControlPlane
	I0429 20:06:34.408856   66875 kubeadm.go:393] duration metric: took 10.144698168s to StartCluster
	I0429 20:06:34.408876   66875 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:34.408961   66875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:06:34.411093   66875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:34.411379   66875 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:06:34.413055   66875 out.go:177] * Verifying Kubernetes components...
	I0429 20:06:34.411518   66875 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:06:34.411607   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:34.414229   66875 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414239   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:34.414261   66875 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-866143"
	I0429 20:06:34.414238   66875 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414232   66875 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414341   66875 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.414355   66875 addons.go:243] addon metrics-server should already be in state true
	I0429 20:06:34.414382   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.414381   66875 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.414396   66875 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:06:34.414439   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.414650   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414677   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.414746   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414758   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.414890   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414923   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.433279   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I0429 20:06:34.433827   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.434444   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.434474   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.434873   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.435436   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.435483   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.435739   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46105
	I0429 20:06:34.435746   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0429 20:06:34.436117   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.436245   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.436638   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.436678   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.436734   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.436747   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.437011   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.437057   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.437218   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.437601   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.437630   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.441092   66875 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.441118   66875 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:06:34.441146   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.441550   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.441582   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.451571   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0429 20:06:34.452041   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.452627   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.452650   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.453080   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.453401   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.455145   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0429 20:06:34.455335   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.457339   66875 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:34.455992   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.456826   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I0429 20:06:34.458912   66875 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:06:34.458925   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:06:34.458942   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.459155   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.459818   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.459836   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.460050   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.460068   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.460196   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.460406   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.460450   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.461005   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.461051   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.462529   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.462624   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.464140   66875 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:06:30.398730   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:30.898542   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.399309   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.898751   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.399374   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.899262   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.398723   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.899281   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.399356   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.899305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.463014   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.463255   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.465585   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.465598   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:06:34.465623   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:06:34.465652   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.465703   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.465892   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.466043   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.468951   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.469342   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.469407   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.469645   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.469817   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.469984   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.470137   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.484411   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0429 20:06:34.484864   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.485366   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.485396   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.485759   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.485937   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.487715   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.487962   66875 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:06:34.487975   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:06:34.487989   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.490407   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.490724   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.490748   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.490890   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.491045   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.491146   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.491274   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.618088   66875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:34.638582   66875 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-866143" to be "Ready" ...
	I0429 20:06:34.729046   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:06:34.729633   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:06:34.729649   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:06:34.752200   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:06:34.770107   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:06:34.770143   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:06:34.847081   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:06:34.847117   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:06:34.889992   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:06:35.821090   66875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091986938s)
	I0429 20:06:35.821127   66875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.068905753s)
	I0429 20:06:35.821145   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821150   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821157   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821162   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821490   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821505   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821514   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821524   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821528   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821534   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821549   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821540   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821902   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821923   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821936   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.822007   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.822024   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.828303   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.828348   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.828591   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.828606   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.828632   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.843540   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.843566   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.843860   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.843877   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.843886   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.843894   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.844127   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.844170   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.844188   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.844203   66875 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-866143"
	I0429 20:06:35.846214   66875 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0429 20:06:33.549917   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:35.550564   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:33.831181   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:33.831552   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:33.831581   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:33.831506   67921 retry.go:31] will retry after 5.040485428s: waiting for machine to come up
	I0429 20:06:35.399419   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:35.899244   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.398934   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.898847   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.399273   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.899102   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.398748   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.399524   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.898813   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:35.847674   66875 addons.go:505] duration metric: took 1.436173952s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0429 20:06:36.641963   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:38.642738   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:38.873188   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.873625   65980 main.go:141] libmachine: (embed-certs-161370) Found IP for machine: 192.168.50.184
	I0429 20:06:38.873653   65980 main.go:141] libmachine: (embed-certs-161370) Reserving static IP address...
	I0429 20:06:38.873669   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has current primary IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.874037   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "embed-certs-161370", mac: "52:54:00:e6:05:1f", ip: "192.168.50.184"} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:38.874091   65980 main.go:141] libmachine: (embed-certs-161370) Reserved static IP address: 192.168.50.184
	I0429 20:06:38.874113   65980 main.go:141] libmachine: (embed-certs-161370) DBG | skip adding static IP to network mk-embed-certs-161370 - found existing host DHCP lease matching {name: "embed-certs-161370", mac: "52:54:00:e6:05:1f", ip: "192.168.50.184"}
	I0429 20:06:38.874132   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Getting to WaitForSSH function...
	I0429 20:06:38.874151   65980 main.go:141] libmachine: (embed-certs-161370) Waiting for SSH to be available...
	I0429 20:06:38.875891   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.876205   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:38.876237   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.876401   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Using SSH client type: external
	I0429 20:06:38.876425   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa (-rw-------)
	I0429 20:06:38.876455   65980 main.go:141] libmachine: (embed-certs-161370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:06:38.876475   65980 main.go:141] libmachine: (embed-certs-161370) DBG | About to run SSH command:
	I0429 20:06:38.876486   65980 main.go:141] libmachine: (embed-certs-161370) DBG | exit 0
	I0429 20:06:39.006684   65980 main.go:141] libmachine: (embed-certs-161370) DBG | SSH cmd err, output: <nil>: 
	I0429 20:06:39.007072   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetConfigRaw
	I0429 20:06:39.007701   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:39.010189   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.010539   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.010577   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.010783   65980 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/config.json ...
	I0429 20:06:39.010970   65980 machine.go:94] provisionDockerMachine start ...
	I0429 20:06:39.010986   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:39.011196   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.013422   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.013832   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.013862   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.013986   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.014183   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.014377   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.014528   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.014710   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.014868   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.014878   65980 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:06:39.119151   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:06:39.119183   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.119425   65980 buildroot.go:166] provisioning hostname "embed-certs-161370"
	I0429 20:06:39.119449   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.119606   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.122418   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.122725   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.122755   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.122894   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.123087   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.123235   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.123371   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.123547   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.123719   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.123734   65980 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-161370 && echo "embed-certs-161370" | sudo tee /etc/hostname
	I0429 20:06:39.247323   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-161370
	
	I0429 20:06:39.247360   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.250202   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.250594   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.250623   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.250761   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.250956   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.251158   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.251354   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.251536   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.251724   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.251746   65980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-161370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-161370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-161370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:06:39.370366   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:06:39.370395   65980 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:06:39.370415   65980 buildroot.go:174] setting up certificates
	I0429 20:06:39.370429   65980 provision.go:84] configureAuth start
	I0429 20:06:39.370441   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.370754   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:39.373600   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.373977   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.374011   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.374305   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.376654   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.376999   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.377032   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.377156   65980 provision.go:143] copyHostCerts
	I0429 20:06:39.377217   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:06:39.377228   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:06:39.377279   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:06:39.377367   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:06:39.377375   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:06:39.377393   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:06:39.377446   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:06:39.377453   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:06:39.377470   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:06:39.377523   65980 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.embed-certs-161370 san=[127.0.0.1 192.168.50.184 embed-certs-161370 localhost minikube]
	I0429 20:06:39.441865   65980 provision.go:177] copyRemoteCerts
	I0429 20:06:39.441931   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:06:39.441954   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.445189   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.445633   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.445677   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.445918   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.446166   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.446364   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.446521   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:39.535703   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:06:39.571033   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0429 20:06:39.604181   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:06:39.639250   65980 provision.go:87] duration metric: took 268.808275ms to configureAuth
	I0429 20:06:39.639339   65980 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:06:39.639575   65980 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:39.639668   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.642544   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.642975   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.643006   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.643146   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.643348   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.643507   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.643671   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.643838   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.644011   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.644039   65980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:06:39.974134   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:06:39.974168   65980 machine.go:97] duration metric: took 963.184467ms to provisionDockerMachine
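Note on the %!s(MISSING) token in the command above: it is Go's fmt missing-argument marker. The shell command contains a literal %s that the Printf-style logger treated as a format verb, so what actually ran was `printf %s "CRIO_MINIKUBE_OPTIONS=..."`; the same applies to the later `date +%!s(MISSING).%!N(MISSING)` (really `date +%s.%N`) and the `find ... -printf "%!p(MISSING), "` command. A minimal reproduction, using nothing beyond the standard library:

    package main

    import "fmt"

    func main() {
    	// Logging a shell command through a Printf-style call without escaping
    	// its literal % sequences produces the missing-argument markers seen above.
    	fmt.Printf("printf %s \"CRIO_MINIKUBE_OPTIONS=...\"\n") // printf %!s(MISSING) "CRIO_MINIKUBE_OPTIONS=..."
    	fmt.Printf("date +%s.%N\n")                             // date +%!s(MISSING).%!N(MISSING)
    }
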
	I0429 20:06:39.974186   65980 start.go:293] postStartSetup for "embed-certs-161370" (driver="kvm2")
	I0429 20:06:39.974201   65980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:06:39.974229   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:39.974601   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:06:39.974636   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.977843   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.978295   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.978328   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.978528   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.978768   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.978939   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.979144   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.066379   65980 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:06:40.071720   65980 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:06:40.071742   65980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:06:40.071798   65980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:06:40.071875   65980 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:06:40.071965   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:06:40.082556   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:40.112774   65980 start.go:296] duration metric: took 138.571139ms for postStartSetup
	I0429 20:06:40.112827   65980 fix.go:56] duration metric: took 23.080734046s for fixHost
	I0429 20:06:40.112859   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.115931   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.116414   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.116448   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.116643   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.116859   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.117026   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.117169   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.117358   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:40.117560   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:40.117576   65980 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:06:40.223697   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421200.206855033
	
	I0429 20:06:40.223722   65980 fix.go:216] guest clock: 1714421200.206855033
	I0429 20:06:40.223732   65980 fix.go:229] Guest: 2024-04-29 20:06:40.206855033 +0000 UTC Remote: 2024-04-29 20:06:40.112832003 +0000 UTC m=+362.399028562 (delta=94.02303ms)
	I0429 20:06:40.223777   65980 fix.go:200] guest clock delta is within tolerance: 94.02303ms
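The delta reported here is simply the guest's `date +%s.%N` reading minus the host-side timestamp recorded for the same moment. A small sketch using the two values from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Guest clock as reported over SSH (seconds.nanoseconds from `date +%s.%N`)
    	// and the host-side wall-clock time recorded alongside it.
    	guest := time.Unix(1714421200, 206855033)
    	remote := time.Date(2024, time.April, 29, 20, 6, 40, 112832003, time.UTC)
    	fmt.Println(guest.Sub(remote)) // 94.02303ms, within the allowed clock-skew tolerance
    }
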
	I0429 20:06:40.223782   65980 start.go:83] releasing machines lock for "embed-certs-161370", held for 23.191744513s
	I0429 20:06:40.223804   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.224106   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:40.226904   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.227299   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.227328   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.227462   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.227955   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.228117   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.228199   65980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:06:40.228238   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.228353   65980 ssh_runner.go:195] Run: cat /version.json
	I0429 20:06:40.228378   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.230943   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231151   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231370   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.231401   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231585   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.231595   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.231629   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231794   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.231806   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.231982   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.232000   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.232182   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.232197   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.232303   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.337533   65980 ssh_runner.go:195] Run: systemctl --version
	I0429 20:06:40.347252   65980 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:06:40.494668   65980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:06:40.502707   65980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:06:40.502788   65980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:06:40.522261   65980 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:06:40.522298   65980 start.go:494] detecting cgroup driver to use...
	I0429 20:06:40.522368   65980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:06:40.540576   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:06:40.557130   65980 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:06:40.557203   65980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:06:40.573803   65980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:06:40.589730   65980 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:06:40.731625   65980 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:06:40.902594   65980 docker.go:233] disabling docker service ...
	I0429 20:06:40.902665   65980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:06:40.921454   65980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:06:40.938734   65980 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:06:41.081822   65980 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:06:41.237778   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:06:41.254086   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:06:41.276277   65980 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:06:41.276362   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.288903   65980 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:06:41.288972   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.301347   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.313639   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.325885   65980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:06:41.338215   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.350839   65980 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.372124   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
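Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands; the resulting file itself is not printed in the log):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
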
	I0429 20:06:41.385505   65980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:06:41.397626   65980 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:06:41.397704   65980 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:06:41.413915   65980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:06:41.427068   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:41.575690   65980 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:06:41.748047   65980 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:06:41.748132   65980 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:06:41.753313   65980 start.go:562] Will wait 60s for crictl version
	I0429 20:06:41.753379   65980 ssh_runner.go:195] Run: which crictl
	I0429 20:06:41.757672   65980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:06:41.794045   65980 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:06:41.794150   65980 ssh_runner.go:195] Run: crio --version
	I0429 20:06:41.831177   65980 ssh_runner.go:195] Run: crio --version
	I0429 20:06:41.865125   65980 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:06:38.049006   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:40.050003   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:42.050213   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:41.866698   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:41.869477   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:41.869815   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:41.869848   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:41.870107   65980 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0429 20:06:41.874917   65980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:41.889196   65980 kubeadm.go:877] updating cluster {Name:embed-certs-161370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:06:41.889353   65980 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:06:41.889423   65980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:41.936285   65980 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:06:41.936352   65980 ssh_runner.go:195] Run: which lz4
	I0429 20:06:41.941893   65980 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:06:41.947071   65980 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:06:41.947112   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 20:06:40.399024   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:40.899056   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.399275   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.899285   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.399200   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.899243   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.398590   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.899346   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.143962   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:41.645981   66875 node_ready.go:49] node "default-k8s-diff-port-866143" has status "Ready":"True"
	I0429 20:06:41.646007   66875 node_ready.go:38] duration metric: took 7.007388661s for node "default-k8s-diff-port-866143" to be "Ready" ...
	I0429 20:06:41.646018   66875 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:41.652664   66875 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.657667   66875 pod_ready.go:92] pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.657685   66875 pod_ready.go:81] duration metric: took 4.993051ms for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.657694   66875 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.662632   66875 pod_ready.go:92] pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.662650   66875 pod_ready.go:81] duration metric: took 4.950519ms for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.662658   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.667488   66875 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.667509   66875 pod_ready.go:81] duration metric: took 4.844299ms for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.667520   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.672480   66875 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.672501   66875 pod_ready.go:81] duration metric: took 4.974639ms for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.672512   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:42.042828   66875 pod_ready.go:92] pod "kube-proxy-zddtx" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:42.042856   66875 pod_ready.go:81] duration metric: took 370.336555ms for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:42.042868   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.051930   66875 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:44.548970   66875 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:44.548999   66875 pod_ready.go:81] duration metric: took 2.506120519s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.549011   66875 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.051077   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:46.052233   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:43.759688   65980 crio.go:462] duration metric: took 1.817838869s to copy over tarball
	I0429 20:06:43.759784   65980 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:46.405802   65980 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.64598022s)
	I0429 20:06:46.405851   65980 crio.go:469] duration metric: took 2.646122331s to extract the tarball
	I0429 20:06:46.405861   65980 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:46.444700   65980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:46.503047   65980 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 20:06:46.503086   65980 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:06:46.503098   65980 kubeadm.go:928] updating node { 192.168.50.184 8443 v1.30.0 crio true true} ...
	I0429 20:06:46.503234   65980 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-161370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:46.503334   65980 ssh_runner.go:195] Run: crio config
	I0429 20:06:46.563489   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:06:46.563511   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:46.563523   65980 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:46.563542   65980 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-161370 NodeName:embed-certs-161370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:06:46.563662   65980 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-161370"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:46.563719   65980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:06:46.576288   65980 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:46.576350   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:46.586807   65980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0429 20:06:46.605883   65980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:46.626741   65980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0429 20:06:46.647223   65980 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:46.652262   65980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:46.667095   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:46.804937   65980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:46.831022   65980 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370 for IP: 192.168.50.184
	I0429 20:06:46.831048   65980 certs.go:194] generating shared ca certs ...
	I0429 20:06:46.831067   65980 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:46.831251   65980 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:46.831295   65980 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:46.831306   65980 certs.go:256] generating profile certs ...
	I0429 20:06:46.831385   65980 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/client.key
	I0429 20:06:46.831440   65980 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.key.9384fac7
	I0429 20:06:46.831476   65980 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.key
	I0429 20:06:46.831582   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:46.831610   65980 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:46.831617   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:46.831635   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:46.831662   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:46.831691   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:46.831729   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:46.832571   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:46.896363   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:46.939336   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:46.976256   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:47.007777   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0429 20:06:47.045019   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:47.079584   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:47.114002   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:06:47.142163   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:47.170063   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:47.199098   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:47.228985   65980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:47.250928   65980 ssh_runner.go:195] Run: openssl version
	I0429 20:06:47.258215   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:47.271653   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.277100   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.277183   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.283876   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:47.297519   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:47.311104   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.316347   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.316408   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.322992   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:47.337744   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:47.351332   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.356912   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.356964   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.363339   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:47.378501   65980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:47.383995   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:47.391157   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:47.398039   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:47.405117   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:47.412125   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:47.419257   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 20:06:47.425917   65980 kubeadm.go:391] StartCluster: {Name:embed-certs-161370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:47.426009   65980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:47.426049   65980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:47.469133   65980 cri.go:89] found id: ""
	I0429 20:06:47.469216   65980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:47.481852   65980 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:47.481878   65980 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:47.481883   65980 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:47.481926   65980 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:47.495254   65980 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:47.496760   65980 kubeconfig.go:125] found "embed-certs-161370" server: "https://192.168.50.184:8443"
	I0429 20:06:47.499898   65980 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:47.511866   65980 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.184
	I0429 20:06:47.511903   65980 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:47.511917   65980 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:47.511972   65980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:47.563879   65980 cri.go:89] found id: ""
	I0429 20:06:47.563956   65980 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:47.583490   65980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:47.595867   65980 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:47.595893   65980 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:47.595947   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:06:47.608218   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:47.608283   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:47.620329   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:06:47.631394   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:47.631527   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:47.643107   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:06:47.654164   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:47.654233   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:47.665890   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:06:47.676817   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:47.676859   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:47.688608   65980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:47.700068   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:45.398908   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:45.898619   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.398795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.899058   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.399257   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.899269   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.398874   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.898653   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.399305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.898855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.556692   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:49.056546   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:48.550949   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:50.551905   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:47.821391   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.623284   65980 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.31791052s)
	I0429 20:06:49.623343   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.870630   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.950525   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:50.061240   65980 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:50.061331   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.562165   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.062299   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.139853   65980 api_server.go:72] duration metric: took 1.078602354s to wait for apiserver process to appear ...
	I0429 20:06:51.139883   65980 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:06:51.139905   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:51.140472   65980 api_server.go:269] stopped: https://192.168.50.184:8443/healthz: Get "https://192.168.50.184:8443/healthz": dial tcp 192.168.50.184:8443: connect: connection refused
	I0429 20:06:51.640813   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:50.398577   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.899284   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.399361   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.899134   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.399211   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.898733   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.399280   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.898915   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.399264   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.898840   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.057650   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:53.559429   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:53.049570   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:55.049866   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:57.050558   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:54.540707   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:54.540765   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:54.540797   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:54.618982   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:54.619016   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:54.640333   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:54.674491   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:54.674542   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:55.140955   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:55.157479   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:55.157517   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:55.639999   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:55.646278   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:55.646311   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:56.140938   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:56.147336   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:56.147371   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:56.640927   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:56.647027   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:56.647054   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:57.140894   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:57.145193   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:57.145236   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:57.640842   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:57.645453   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:57.645478   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:58.140524   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:58.146317   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0429 20:06:58.153972   65980 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:58.154011   65980 api_server.go:131] duration metric: took 7.014120443s to wait for apiserver health ...
	I0429 20:06:58.154028   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:06:58.154036   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:58.155341   65980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:55.398622   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:55.898563   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.399306   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.898473   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.899278   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.399121   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.899291   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.399197   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.898901   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.056503   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:58.056988   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:59.053737   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:01.555480   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:58.156794   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:58.176870   65980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
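
The log records only that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; its contents are not shown. Purely as an assumed example of what a bridge CNI conflist of this kind typically contains (every field and the subnet below are illustrative guesses, not the file minikube actually wrote):

    // Assumed example only: a typical bridge+portmap CNI conflist. The real
    // /etc/cni/net.d/1-k8s.conflist is not shown in the log above.
    package main

    import "os"

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Write the illustrative config where kubelet/CRI-O look for CNI networks.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }
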
	I0429 20:06:58.215333   65980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:58.230619   65980 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:58.230655   65980 system_pods.go:61] "coredns-7db6d8ff4d-wjfff" [bd92e456-b538-49ae-984b-c6bcea6add30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:58.230667   65980 system_pods.go:61] "etcd-embed-certs-161370" [da2d022f-33c4-49b7-b997-a6783043f3e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:58.230675   65980 system_pods.go:61] "kube-apiserver-embed-certs-161370" [032913c9-bb91-46ba-ad4d-a4d5b63d806f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:58.230681   65980 system_pods.go:61] "kube-controller-manager-embed-certs-161370" [2f3ae1ff-0688-4c70-a888-5e1e640f64bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:58.230685   65980 system_pods.go:61] "kube-proxy-9kmg8" [01bbd2ca-24d2-4c7c-b4ea-79604ac3f486] Running
	I0429 20:06:58.230689   65980 system_pods.go:61] "kube-scheduler-embed-certs-161370" [c88ab7cc-1aef-48cb-814e-eff8e946885c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:58.230694   65980 system_pods.go:61] "metrics-server-569cc877fc-c4h7f" [bf1cae8d-cca1-4573-935f-e60118ca9575] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:58.230698   65980 system_pods.go:61] "storage-provisioner" [1686a084-f28b-4b26-b748-85a2a3613dde] Running
	I0429 20:06:58.230703   65980 system_pods.go:74] duration metric: took 15.348727ms to wait for pod list to return data ...
	I0429 20:06:58.230713   65980 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:58.233411   65980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:58.233436   65980 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:58.233447   65980 node_conditions.go:105] duration metric: took 2.729694ms to run NodePressure ...
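
The NodePressure step above amounts to reading node capacity and confirming that no pressure conditions are set. A hedged client-go sketch of an equivalent check (an assumed illustration, not minikube's node_conditions.go; the kubeconfig path is the one used elsewhere in this log):

    // Hedged sketch, not minikube's implementation: list nodes, print the same
    // capacity figures the log reports, and flag any pressure condition that is True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status == corev1.ConditionTrue {
                        fmt.Printf("  pressure condition %s is True\n", c.Type)
                    }
                }
            }
        }
    }
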
	I0429 20:06:58.233466   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:58.532729   65980 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:58.538018   65980 kubeadm.go:733] kubelet initialised
	I0429 20:06:58.538038   65980 kubeadm.go:734] duration metric: took 5.283028ms waiting for restarted kubelet to initialise ...
	I0429 20:06:58.538046   65980 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:58.544267   65980 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:00.553501   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:00.398537   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.899359   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.399125   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.899428   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.399457   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.899355   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.399421   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.899376   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.399331   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.899263   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.555517   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:02.557429   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:05.056216   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:04.049941   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:06.051285   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:03.069330   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:03.554710   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.554732   65980 pod_ready.go:81] duration metric: took 5.010440873s for pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.554742   65980 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.562277   65980 pod_ready.go:92] pod "etcd-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.562298   65980 pod_ready.go:81] duration metric: took 7.550156ms for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.562306   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.567038   65980 pod_ready.go:92] pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.567060   65980 pod_ready.go:81] duration metric: took 4.748002ms for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.567069   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.573632   65980 pod_ready.go:92] pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.573664   65980 pod_ready.go:81] duration metric: took 1.006574407s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.573675   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9kmg8" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.578356   65980 pod_ready.go:92] pod "kube-proxy-9kmg8" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.578377   65980 pod_ready.go:81] duration metric: took 4.694437ms for pod "kube-proxy-9kmg8" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.578388   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.749703   65980 pod_ready.go:92] pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.749733   65980 pod_ready.go:81] duration metric: took 171.336391ms for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.749750   65980 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:06.757041   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
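
Each pod_ready.go line above reduces to one question: is the pod's PodReady condition True? A minimal client-go sketch of that check (an assumed illustration, not minikube's pod_ready.go; the pod name and kubeconfig path are copied from this log):

    // Assumed illustration of the readiness check behind `has status "Ready":"False"`.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-c4h7f", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", podIsReady(pod))
    }
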
	I0429 20:07:05.398458   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:05.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.399205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.399308   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.898749   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:08.399182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:08.399271   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:08.448015   66615 cri.go:89] found id: ""
	I0429 20:07:08.448041   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.448049   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:08.448055   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:08.448103   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:08.491239   66615 cri.go:89] found id: ""
	I0429 20:07:08.491265   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.491274   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:08.491280   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:08.491330   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:08.541203   66615 cri.go:89] found id: ""
	I0429 20:07:08.541226   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.541234   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:08.541239   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:08.541300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:08.584370   66615 cri.go:89] found id: ""
	I0429 20:07:08.584393   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.584401   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:08.584407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:08.584469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:08.625126   66615 cri.go:89] found id: ""
	I0429 20:07:08.625158   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.625169   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:08.625182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:08.625246   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:08.666987   66615 cri.go:89] found id: ""
	I0429 20:07:08.667018   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.667032   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:08.667039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:08.667105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:08.712363   66615 cri.go:89] found id: ""
	I0429 20:07:08.712394   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.712405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:08.712413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:08.712471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:08.762122   66615 cri.go:89] found id: ""
	I0429 20:07:08.762151   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.762170   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:08.762180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:08.762196   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:08.808218   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:08.808246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:08.867278   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:08.867317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:08.884230   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:08.884266   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:09.018183   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:09.018208   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:09.018224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
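
When every crictl query above comes back empty, logs.go falls back to node-level diagnostics; the `describe nodes` step fails because nothing is listening on localhost:8443 yet, which is consistent with zero control-plane containers. The commands are all visible in the log; as a sketch only (assuming passwordless sudo on the node, and not minikube's actual logs.go), the same collection could be scripted like this:

    // Sketch only: run the same diagnostic commands the log shows, via bash -c,
    // and dump their combined output.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := []string{
            `sudo journalctl -u kubelet -n 400`,
            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
            `sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
            `sudo journalctl -u crio -n 400`,
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            fmt.Printf("$ %s\nerr=%v\n%s\n", c, err, out)
        }
    }
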
	I0429 20:07:07.555443   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:09.557653   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:08.551823   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.051232   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:09.257687   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.758829   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.587112   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:11.603711   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:11.603781   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:11.651087   66615 cri.go:89] found id: ""
	I0429 20:07:11.651115   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.651123   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:11.651128   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:11.651192   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:11.691888   66615 cri.go:89] found id: ""
	I0429 20:07:11.691914   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.691921   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:11.691928   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:11.691976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:11.733411   66615 cri.go:89] found id: ""
	I0429 20:07:11.733441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.733452   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:11.733460   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:11.733517   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:11.774620   66615 cri.go:89] found id: ""
	I0429 20:07:11.774648   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.774659   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:11.774666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:11.774729   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:11.821410   66615 cri.go:89] found id: ""
	I0429 20:07:11.821441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.821449   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:11.821455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:11.821502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:11.864699   66615 cri.go:89] found id: ""
	I0429 20:07:11.864730   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.864741   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:11.864749   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:11.864809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:11.904637   66615 cri.go:89] found id: ""
	I0429 20:07:11.904678   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.904687   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:11.904693   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:11.904742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:11.970914   66615 cri.go:89] found id: ""
	I0429 20:07:11.970945   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.970957   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:11.970968   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:11.970984   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:12.024185   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:12.024226   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:12.040319   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:12.040349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:12.137888   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:12.137915   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:12.137941   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:12.210256   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:12.210290   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:14.758756   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:14.775321   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:14.775386   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:14.812637   66615 cri.go:89] found id: ""
	I0429 20:07:14.812662   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.812672   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:14.812679   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:14.812735   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:14.851503   66615 cri.go:89] found id: ""
	I0429 20:07:14.851536   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.851547   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:14.851554   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:14.851613   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:14.885708   66615 cri.go:89] found id: ""
	I0429 20:07:14.885739   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.885749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:14.885756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:14.885817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:14.926133   66615 cri.go:89] found id: ""
	I0429 20:07:14.926162   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.926173   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:14.926181   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:14.926240   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:12.056093   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.056500   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:13.549924   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:15.550544   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.257394   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:16.756833   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.967553   66615 cri.go:89] found id: ""
	I0429 20:07:14.967582   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.967593   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:14.967601   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:14.967659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:15.006174   66615 cri.go:89] found id: ""
	I0429 20:07:15.006199   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.006207   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:15.006218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:15.006293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:15.046916   66615 cri.go:89] found id: ""
	I0429 20:07:15.046940   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.046947   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:15.046953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:15.047009   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:15.089229   66615 cri.go:89] found id: ""
	I0429 20:07:15.089256   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.089266   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:15.089278   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:15.089298   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:15.143518   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:15.143561   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:15.162742   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:15.162769   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:15.242850   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:15.242872   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:15.242884   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:15.315783   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:15.315825   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:17.863336   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:17.877802   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:17.877869   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:17.935714   66615 cri.go:89] found id: ""
	I0429 20:07:17.935738   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.935746   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:17.935754   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:17.935810   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:17.988496   66615 cri.go:89] found id: ""
	I0429 20:07:17.988529   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.988540   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:17.988547   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:17.988610   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:18.030695   66615 cri.go:89] found id: ""
	I0429 20:07:18.030726   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.030737   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:18.030745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:18.030822   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:18.077452   66615 cri.go:89] found id: ""
	I0429 20:07:18.077481   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.077491   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:18.077498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:18.077561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:18.120102   66615 cri.go:89] found id: ""
	I0429 20:07:18.120127   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.120136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:18.120141   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:18.120200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:18.163440   66615 cri.go:89] found id: ""
	I0429 20:07:18.163469   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.163480   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:18.163487   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:18.163549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:18.202650   66615 cri.go:89] found id: ""
	I0429 20:07:18.202680   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.202693   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:18.202699   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:18.202760   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:18.244378   66615 cri.go:89] found id: ""
	I0429 20:07:18.244408   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.244418   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:18.244429   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:18.244446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:18.289246   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:18.289279   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:18.343382   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:18.343425   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:18.359070   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:18.359103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:18.440316   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:18.440337   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:18.440351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:16.555711   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.555851   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.051297   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:20.551594   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.756941   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:20.756974   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:22.757155   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:21.019552   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:21.036407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:21.036523   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:21.083148   66615 cri.go:89] found id: ""
	I0429 20:07:21.083171   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.083179   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:21.083184   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:21.083231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:21.129382   66615 cri.go:89] found id: ""
	I0429 20:07:21.129415   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.129426   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:21.129434   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:21.129502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:21.172978   66615 cri.go:89] found id: ""
	I0429 20:07:21.173007   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.173015   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:21.173020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:21.173068   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:21.218124   66615 cri.go:89] found id: ""
	I0429 20:07:21.218159   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.218171   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:21.218178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:21.218243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:21.260603   66615 cri.go:89] found id: ""
	I0429 20:07:21.260640   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.260651   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:21.260658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:21.260723   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:21.302351   66615 cri.go:89] found id: ""
	I0429 20:07:21.302386   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.302398   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:21.302407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:21.302498   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:21.347003   66615 cri.go:89] found id: ""
	I0429 20:07:21.347028   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.347037   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:21.347043   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:21.347098   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:21.388202   66615 cri.go:89] found id: ""
	I0429 20:07:21.388236   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.388245   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:21.388257   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:21.388272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:21.442706   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:21.442744   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:21.457453   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:21.457489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:21.539669   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:21.539695   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:21.539707   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:21.625210   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:21.625247   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.173256   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:24.189920   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:24.189990   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:24.236730   66615 cri.go:89] found id: ""
	I0429 20:07:24.236761   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.236772   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:24.236779   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:24.236843   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:24.279031   66615 cri.go:89] found id: ""
	I0429 20:07:24.279055   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.279062   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:24.279067   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:24.279112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:24.321622   66615 cri.go:89] found id: ""
	I0429 20:07:24.321647   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.321657   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:24.321665   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:24.321726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:24.360884   66615 cri.go:89] found id: ""
	I0429 20:07:24.360911   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.360919   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:24.360924   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:24.360983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:24.414439   66615 cri.go:89] found id: ""
	I0429 20:07:24.414463   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.414472   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:24.414477   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:24.414559   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:24.456994   66615 cri.go:89] found id: ""
	I0429 20:07:24.457023   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.457033   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:24.457041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:24.457107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:24.497991   66615 cri.go:89] found id: ""
	I0429 20:07:24.498026   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.498036   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:24.498044   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:24.498137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:24.539375   66615 cri.go:89] found id: ""
	I0429 20:07:24.539415   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.539426   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:24.539438   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:24.539453   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:24.661778   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:24.661804   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:24.661820   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:24.748180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:24.748215   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.795963   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:24.795999   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:24.851485   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:24.851524   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:20.556543   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:22.556775   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:24.559759   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:23.052715   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:25.550857   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.551209   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:25.256195   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.258199   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.367869   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:27.385633   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:27.385716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:27.423181   66615 cri.go:89] found id: ""
	I0429 20:07:27.423210   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.423222   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:27.423233   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:27.423293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:27.467385   66615 cri.go:89] found id: ""
	I0429 20:07:27.467419   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.467432   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:27.467439   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:27.467503   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:27.506171   66615 cri.go:89] found id: ""
	I0429 20:07:27.506204   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.506216   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:27.506223   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:27.506272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:27.545043   66615 cri.go:89] found id: ""
	I0429 20:07:27.545066   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.545074   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:27.545080   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:27.545136   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:27.592279   66615 cri.go:89] found id: ""
	I0429 20:07:27.592306   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.592314   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:27.592320   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:27.592379   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:27.628569   66615 cri.go:89] found id: ""
	I0429 20:07:27.628595   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.628604   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:27.628612   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:27.628659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:27.667937   66615 cri.go:89] found id: ""
	I0429 20:07:27.667967   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.667978   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:27.667985   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:27.668047   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:27.708813   66615 cri.go:89] found id: ""
	I0429 20:07:27.708844   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.708853   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:27.708861   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:27.708876   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:27.789589   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:27.789625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:27.837147   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:27.837180   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:27.891928   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:27.891956   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:27.906162   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:27.906188   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:27.983738   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:27.057372   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:29.555872   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:30.049373   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:32.052279   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:29.758675   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:32.257486   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:30.484404   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:30.503968   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:30.504041   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:30.553070   66615 cri.go:89] found id: ""
	I0429 20:07:30.553099   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.553111   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:30.553118   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:30.553180   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:30.609226   66615 cri.go:89] found id: ""
	I0429 20:07:30.609253   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.609262   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:30.609267   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:30.609324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:30.658359   66615 cri.go:89] found id: ""
	I0429 20:07:30.658384   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.658395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:30.658401   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:30.658459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:30.710024   66615 cri.go:89] found id: ""
	I0429 20:07:30.710048   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.710058   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:30.710114   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:30.710173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:30.752361   66615 cri.go:89] found id: ""
	I0429 20:07:30.752388   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.752398   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:30.752405   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:30.752469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:30.793311   66615 cri.go:89] found id: ""
	I0429 20:07:30.793333   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.793341   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:30.793347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:30.793394   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:30.832371   66615 cri.go:89] found id: ""
	I0429 20:07:30.832400   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.832411   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:30.832417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:30.832469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:30.871183   66615 cri.go:89] found id: ""
	I0429 20:07:30.871215   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.871226   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:30.871237   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:30.871253   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:30.929909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:30.929947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:30.944454   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:30.944482   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:31.022060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:31.022100   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:31.022116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:31.104142   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:31.104185   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:33.651167   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:33.667888   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:33.667948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:33.708455   66615 cri.go:89] found id: ""
	I0429 20:07:33.708484   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.708495   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:33.708502   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:33.708561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:33.747578   66615 cri.go:89] found id: ""
	I0429 20:07:33.747602   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.747611   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:33.747616   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:33.747661   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:33.796005   66615 cri.go:89] found id: ""
	I0429 20:07:33.796036   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.796056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:33.796064   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:33.796128   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:33.836238   66615 cri.go:89] found id: ""
	I0429 20:07:33.836263   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.836271   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:33.836276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:33.836324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:33.877010   66615 cri.go:89] found id: ""
	I0429 20:07:33.877043   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.877056   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:33.877065   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:33.877137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:33.919690   66615 cri.go:89] found id: ""
	I0429 20:07:33.919714   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.919722   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:33.919727   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:33.919797   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:33.959857   66615 cri.go:89] found id: ""
	I0429 20:07:33.959889   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.959900   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:33.959907   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:33.959989   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:33.996349   66615 cri.go:89] found id: ""
	I0429 20:07:33.996376   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.996386   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:33.996396   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:33.996433   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:34.010773   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:34.010808   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:34.091581   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:34.091599   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:34.091611   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:34.173266   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:34.173299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:34.221447   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:34.221479   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:32.055352   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.056364   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.550100   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.550663   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.756264   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.756583   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.776486   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:36.791630   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:36.791764   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:36.837475   66615 cri.go:89] found id: ""
	I0429 20:07:36.837503   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.837513   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:36.837521   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:36.837607   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:36.879902   66615 cri.go:89] found id: ""
	I0429 20:07:36.879936   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.879947   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:36.879954   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:36.880021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:36.918566   66615 cri.go:89] found id: ""
	I0429 20:07:36.918594   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.918608   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:36.918613   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:36.918659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:36.958876   66615 cri.go:89] found id: ""
	I0429 20:07:36.958937   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.958948   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:36.958959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:36.959008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:36.998790   66615 cri.go:89] found id: ""
	I0429 20:07:36.998820   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.998845   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:36.998864   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:36.998932   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:37.036933   66615 cri.go:89] found id: ""
	I0429 20:07:37.036962   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.036972   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:37.036979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:37.037024   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:37.076560   66615 cri.go:89] found id: ""
	I0429 20:07:37.076597   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.076609   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:37.076616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:37.076688   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:37.118324   66615 cri.go:89] found id: ""
	I0429 20:07:37.118351   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.118360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:37.118368   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:37.118380   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:37.194671   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:37.194714   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:37.236269   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:37.236300   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:37.297006   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:37.297061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:37.312696   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:37.312723   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:37.387132   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:39.888111   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:39.903157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:39.903236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:39.945913   66615 cri.go:89] found id: ""
	I0429 20:07:39.945945   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.945956   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:39.945980   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:39.946076   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:36.056553   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:38.057230   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:39.050274   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:41.053502   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:38.756717   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:40.762297   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:39.986494   66615 cri.go:89] found id: ""
	I0429 20:07:39.986521   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.986530   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:39.986538   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:39.986598   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:40.031481   66615 cri.go:89] found id: ""
	I0429 20:07:40.031520   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.031531   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:40.031539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:40.031604   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:40.076792   66615 cri.go:89] found id: ""
	I0429 20:07:40.076816   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.076824   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:40.076830   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:40.076877   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:40.121020   66615 cri.go:89] found id: ""
	I0429 20:07:40.121050   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.121061   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:40.121068   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:40.121134   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:40.173189   66615 cri.go:89] found id: ""
	I0429 20:07:40.173221   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.173233   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:40.173241   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:40.173303   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:40.220190   66615 cri.go:89] found id: ""
	I0429 20:07:40.220212   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.220223   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:40.220229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:40.220293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:40.262552   66615 cri.go:89] found id: ""
	I0429 20:07:40.262579   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.262588   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:40.262600   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:40.262616   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:40.322249   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:40.322289   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:40.338703   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:40.338734   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:40.431311   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:40.431333   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:40.431345   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:40.518410   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:40.518446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:43.062556   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:43.077757   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:43.077844   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:43.129247   66615 cri.go:89] found id: ""
	I0429 20:07:43.129277   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.129289   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:43.129296   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:43.129364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:43.173474   66615 cri.go:89] found id: ""
	I0429 20:07:43.173501   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.173509   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:43.173514   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:43.173566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:43.218788   66615 cri.go:89] found id: ""
	I0429 20:07:43.218812   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.218820   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:43.218825   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:43.218873   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:43.259269   66615 cri.go:89] found id: ""
	I0429 20:07:43.259289   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.259297   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:43.259302   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:43.259362   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:43.301152   66615 cri.go:89] found id: ""
	I0429 20:07:43.301180   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.301189   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:43.301195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:43.301244   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:43.338183   66615 cri.go:89] found id: ""
	I0429 20:07:43.338211   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.338222   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:43.338229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:43.338276   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:43.376919   66615 cri.go:89] found id: ""
	I0429 20:07:43.376946   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.376958   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:43.376966   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:43.377032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:43.417421   66615 cri.go:89] found id: ""
	I0429 20:07:43.417450   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.417457   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:43.417465   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:43.417478   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:43.470009   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:43.470040   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:43.486059   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:43.486109   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:43.561688   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:43.561709   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:43.561725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:43.649713   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:43.649750   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:40.555780   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.056758   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.552176   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:46.049393   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.256870   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:45.258520   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:47.757738   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:46.194996   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:46.210261   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:46.210342   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:46.249208   66615 cri.go:89] found id: ""
	I0429 20:07:46.249240   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.249253   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:46.249260   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:46.249336   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:46.287285   66615 cri.go:89] found id: ""
	I0429 20:07:46.287315   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.287328   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:46.287335   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:46.287397   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:46.327944   66615 cri.go:89] found id: ""
	I0429 20:07:46.327976   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.327988   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:46.327996   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:46.328061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:46.373875   66615 cri.go:89] found id: ""
	I0429 20:07:46.373899   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.373908   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:46.373914   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:46.373967   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:46.413748   66615 cri.go:89] found id: ""
	I0429 20:07:46.413774   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.413783   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:46.413789   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:46.413853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:46.459380   66615 cri.go:89] found id: ""
	I0429 20:07:46.459412   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.459424   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:46.459432   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:46.459496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:46.499833   66615 cri.go:89] found id: ""
	I0429 20:07:46.499861   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.499870   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:46.499876   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:46.499939   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:46.541025   66615 cri.go:89] found id: ""
	I0429 20:07:46.541055   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.541068   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:46.541080   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:46.541096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:46.601187   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:46.601224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:46.617399   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:46.617426   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:46.697076   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:46.697113   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:46.697129   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:46.783265   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:46.783303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.335795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:49.350030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:49.350116   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:49.390278   66615 cri.go:89] found id: ""
	I0429 20:07:49.390315   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.390326   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:49.390333   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:49.390388   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:49.431145   66615 cri.go:89] found id: ""
	I0429 20:07:49.431175   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.431186   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:49.431193   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:49.431252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:49.473965   66615 cri.go:89] found id: ""
	I0429 20:07:49.473997   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.474014   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:49.474022   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:49.474105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:49.515372   66615 cri.go:89] found id: ""
	I0429 20:07:49.515407   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.515419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:49.515427   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:49.515487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:49.552541   66615 cri.go:89] found id: ""
	I0429 20:07:49.552567   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.552576   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:49.552582   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:49.552650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:49.599628   66615 cri.go:89] found id: ""
	I0429 20:07:49.599660   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.599672   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:49.599680   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:49.599745   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:49.642705   66615 cri.go:89] found id: ""
	I0429 20:07:49.642741   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.642752   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:49.642759   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:49.642827   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:49.679864   66615 cri.go:89] found id: ""
	I0429 20:07:49.679888   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.679896   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:49.679905   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:49.679919   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:49.765967   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:49.765986   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:49.766010   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:49.852739   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:49.852779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.905586   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:49.905613   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:45.559781   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:48.059952   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:48.049788   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:50.548836   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.551059   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:50.256898   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.757213   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:49.959443   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:49.959474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:52.476677   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:52.491378   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:52.491458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:52.535801   66615 cri.go:89] found id: ""
	I0429 20:07:52.535827   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.535835   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:52.535841   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:52.535901   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:52.582895   66615 cri.go:89] found id: ""
	I0429 20:07:52.582932   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.582944   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:52.582952   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:52.583022   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:52.627070   66615 cri.go:89] found id: ""
	I0429 20:07:52.627096   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.627113   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:52.627120   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:52.627181   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:52.673312   66615 cri.go:89] found id: ""
	I0429 20:07:52.673339   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.673348   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:52.673353   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:52.673399   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:52.713099   66615 cri.go:89] found id: ""
	I0429 20:07:52.713124   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.713131   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:52.713139   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:52.713205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:52.761982   66615 cri.go:89] found id: ""
	I0429 20:07:52.762007   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.762017   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:52.762024   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:52.762108   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:52.801019   66615 cri.go:89] found id: ""
	I0429 20:07:52.801048   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.801059   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:52.801067   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:52.801141   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:52.842544   66615 cri.go:89] found id: ""
	I0429 20:07:52.842578   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.842602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:52.842613   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:52.842630   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:52.896409   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:52.896442   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:52.912625   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:52.912650   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:52.992231   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:52.992260   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:52.992276   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:53.077473   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:53.077507   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:50.555818   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.556860   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:54.557161   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:54.554094   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:57.049699   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:55.257406   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:57.257840   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:55.625557   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:55.640211   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:55.640284   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:55.683215   66615 cri.go:89] found id: ""
	I0429 20:07:55.683250   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.683259   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:55.683275   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:55.683341   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:55.730820   66615 cri.go:89] found id: ""
	I0429 20:07:55.730851   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.730862   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:55.730869   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:55.730928   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:55.771784   66615 cri.go:89] found id: ""
	I0429 20:07:55.771808   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.771816   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:55.771821   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:55.771866   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:55.814988   66615 cri.go:89] found id: ""
	I0429 20:07:55.815021   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.815034   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:55.815042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:55.815114   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:55.859293   66615 cri.go:89] found id: ""
	I0429 20:07:55.859327   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.859340   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:55.859349   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:55.859416   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:55.901802   66615 cri.go:89] found id: ""
	I0429 20:07:55.901833   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.901844   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:55.901852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:55.901921   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:55.943863   66615 cri.go:89] found id: ""
	I0429 20:07:55.943895   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.943905   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:55.943913   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:55.943977   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:55.986256   66615 cri.go:89] found id: ""
	I0429 20:07:55.986284   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.986296   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:55.986314   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:55.986332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:56.036710   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:56.036742   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:56.099909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:56.099945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:56.117630   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:56.117660   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:56.197396   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:56.197421   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:56.197436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:58.779065   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:58.794086   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:58.794168   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:58.844035   66615 cri.go:89] found id: ""
	I0429 20:07:58.844062   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.844070   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:58.844076   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:58.844133   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:58.887859   66615 cri.go:89] found id: ""
	I0429 20:07:58.887889   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.887900   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:58.887906   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:58.887991   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:58.929039   66615 cri.go:89] found id: ""
	I0429 20:07:58.929072   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.929083   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:58.929092   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:58.929152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:58.965930   66615 cri.go:89] found id: ""
	I0429 20:07:58.965975   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.965983   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:58.965989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:58.966061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:59.005583   66615 cri.go:89] found id: ""
	I0429 20:07:59.005616   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.005628   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:59.005638   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:59.005697   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:59.047964   66615 cri.go:89] found id: ""
	I0429 20:07:59.047994   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.048007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:59.048014   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:59.048077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:59.091851   66615 cri.go:89] found id: ""
	I0429 20:07:59.091891   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.091904   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:59.091909   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:59.091978   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:59.134843   66615 cri.go:89] found id: ""
	I0429 20:07:59.134874   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.134881   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:59.134890   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:59.134907   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:59.219048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:59.219084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:59.267404   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:59.267436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:59.322264   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:59.322303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:59.339196   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:59.339235   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:59.441904   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:56.558660   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.057214   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.054473   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.550825   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.756683   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.759031   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.942998   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:01.957442   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:01.957502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:02.002240   66615 cri.go:89] found id: ""
	I0429 20:08:02.002271   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.002283   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:02.002291   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:02.002353   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:02.051506   66615 cri.go:89] found id: ""
	I0429 20:08:02.051535   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.051546   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:02.051552   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:02.051611   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:02.093194   66615 cri.go:89] found id: ""
	I0429 20:08:02.093234   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.093247   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:02.093254   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:02.093317   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:02.134988   66615 cri.go:89] found id: ""
	I0429 20:08:02.135016   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.135027   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:02.135034   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:02.135099   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:02.182954   66615 cri.go:89] found id: ""
	I0429 20:08:02.182982   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.182993   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:02.183000   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:02.183063   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:02.227778   66615 cri.go:89] found id: ""
	I0429 20:08:02.227807   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.227817   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:02.227826   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:02.227888   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:02.265593   66615 cri.go:89] found id: ""
	I0429 20:08:02.265624   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.265634   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:02.265641   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:02.265701   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:02.306520   66615 cri.go:89] found id: ""
	I0429 20:08:02.306550   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.306558   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:02.306566   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:02.306578   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:02.323806   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:02.323844   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:02.407110   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:02.407140   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:02.407153   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:02.493755   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:02.493791   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:02.538610   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:02.538640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:01.556084   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:03.556487   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:03.551788   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:05.553047   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:04.257831   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:06.756438   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:05.096630   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:05.111112   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:05.111173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:05.151237   66615 cri.go:89] found id: ""
	I0429 20:08:05.151268   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.151279   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:05.151286   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:05.151370   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:05.205344   66615 cri.go:89] found id: ""
	I0429 20:08:05.205379   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.205389   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:05.205396   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:05.205478   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:05.244394   66615 cri.go:89] found id: ""
	I0429 20:08:05.244426   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.244438   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:05.244445   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:05.244504   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:05.285320   66615 cri.go:89] found id: ""
	I0429 20:08:05.285343   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.285350   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:05.285356   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:05.285404   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:05.327618   66615 cri.go:89] found id: ""
	I0429 20:08:05.327645   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.327657   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:05.327664   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:05.327742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:05.369152   66615 cri.go:89] found id: ""
	I0429 20:08:05.369178   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.369194   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:05.369208   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:05.369277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:05.407206   66615 cri.go:89] found id: ""
	I0429 20:08:05.407234   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.407243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:05.407248   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:05.407299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:05.447404   66615 cri.go:89] found id: ""
	I0429 20:08:05.447438   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.447449   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:05.447459   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:05.447475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:05.529660   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:05.529700   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:05.582510   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:05.582565   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:05.639300   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:05.639351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:05.656825   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:05.656860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:05.730863   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.231635   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:08.247722   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:08.247811   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:08.298354   66615 cri.go:89] found id: ""
	I0429 20:08:08.298382   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.298395   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:08.298401   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:08.298459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:08.339497   66615 cri.go:89] found id: ""
	I0429 20:08:08.339536   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.339549   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:08.339556   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:08.339609   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:08.379665   66615 cri.go:89] found id: ""
	I0429 20:08:08.379695   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.379705   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:08.379712   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:08.379786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:08.419698   66615 cri.go:89] found id: ""
	I0429 20:08:08.419722   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.419732   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:08.419739   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:08.419798   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:08.463901   66615 cri.go:89] found id: ""
	I0429 20:08:08.463935   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.463946   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:08.463953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:08.464028   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:08.504568   66615 cri.go:89] found id: ""
	I0429 20:08:08.504603   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.504617   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:08.504626   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:08.504695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:08.545634   66615 cri.go:89] found id: ""
	I0429 20:08:08.545661   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.545671   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:08.545678   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:08.545741   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:08.586936   66615 cri.go:89] found id: ""
	I0429 20:08:08.586965   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.586976   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:08.586987   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:08.587003   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:08.641755   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:08.641794   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:08.659798   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:08.659845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:08.744265   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.744288   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:08.744303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:08.823813   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:08.823860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:05.557172   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:07.558538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:10.057841   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:08.049902   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:10.050576   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:12.051331   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:08.757300   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:11.257697   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:11.375600   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:11.396286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:11.396351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:11.442737   66615 cri.go:89] found id: ""
	I0429 20:08:11.442781   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.442789   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:11.442797   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:11.442865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:11.484131   66615 cri.go:89] found id: ""
	I0429 20:08:11.484158   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.484167   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:11.484172   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:11.484231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:11.526647   66615 cri.go:89] found id: ""
	I0429 20:08:11.526684   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.526695   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:11.526705   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:11.526777   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:11.572001   66615 cri.go:89] found id: ""
	I0429 20:08:11.572028   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.572036   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:11.572042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:11.572100   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:11.618980   66615 cri.go:89] found id: ""
	I0429 20:08:11.619003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.619011   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:11.619016   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:11.619077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:11.667079   66615 cri.go:89] found id: ""
	I0429 20:08:11.667107   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.667115   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:11.667123   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:11.667198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:11.707967   66615 cri.go:89] found id: ""
	I0429 20:08:11.708003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.708013   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:11.708020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:11.708073   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:11.753024   66615 cri.go:89] found id: ""
	I0429 20:08:11.753053   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.753062   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:11.753070   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:11.753081   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:11.820171   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:11.820210   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:11.852234   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:11.852263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:11.971060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:11.971085   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:11.971097   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:12.049797   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:12.049845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:14.601181   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:14.621413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:14.621496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:14.677453   66615 cri.go:89] found id: ""
	I0429 20:08:14.677486   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.677498   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:14.677504   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:14.677562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:14.720517   66615 cri.go:89] found id: ""
	I0429 20:08:14.720548   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.720560   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:14.720571   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:14.720636   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:14.770186   66615 cri.go:89] found id: ""
	I0429 20:08:14.770211   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.770219   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:14.770225   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:14.770301   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:14.815286   66615 cri.go:89] found id: ""
	I0429 20:08:14.815310   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.815320   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:14.815327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:14.815389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:14.862625   66615 cri.go:89] found id: ""
	I0429 20:08:14.862651   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.862662   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:14.862669   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:14.862726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:14.910517   66615 cri.go:89] found id: ""
	I0429 20:08:14.910554   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.910565   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:14.910572   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:14.910634   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:14.951085   66615 cri.go:89] found id: ""
	I0429 20:08:14.951110   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.951119   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:14.951124   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:14.951173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:12.558191   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:15.056987   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:14.051423   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:16.051632   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:13.757001   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:16.257425   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:14.991414   66615 cri.go:89] found id: ""
	I0429 20:08:14.991443   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.991455   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:14.991464   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:14.991476   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:15.047551   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:15.047583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:15.063667   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:15.063692   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:15.141744   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:15.141820   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:15.141841   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:15.225676   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:15.225722   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:17.774459   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:17.793137   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:17.793210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:17.856725   66615 cri.go:89] found id: ""
	I0429 20:08:17.856756   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.856767   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:17.856774   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:17.856835   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:17.916510   66615 cri.go:89] found id: ""
	I0429 20:08:17.916542   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.916554   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:17.916561   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:17.916646   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:17.970835   66615 cri.go:89] found id: ""
	I0429 20:08:17.970867   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.970877   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:17.970884   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:17.970948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:18.013324   66615 cri.go:89] found id: ""
	I0429 20:08:18.013353   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.013366   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:18.013384   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:18.013458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:18.062930   66615 cri.go:89] found id: ""
	I0429 20:08:18.062957   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.062968   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:18.062974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:18.063040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:18.111792   66615 cri.go:89] found id: ""
	I0429 20:08:18.111820   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.111829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:18.111834   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:18.111911   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:18.160096   66615 cri.go:89] found id: ""
	I0429 20:08:18.160121   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.160129   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:18.160135   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:18.160198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:18.204012   66615 cri.go:89] found id: ""
	I0429 20:08:18.204044   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.204052   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:18.204062   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:18.204074   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:18.284288   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:18.284337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:18.340746   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:18.340779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:18.397612   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:18.397652   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:18.413425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:18.413455   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:18.493598   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:17.058215   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:19.556308   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:18.551175   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:20.551292   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:22.551637   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:18.757370   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:21.259192   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:20.994339   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:21.010199   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:21.010289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:21.052190   66615 cri.go:89] found id: ""
	I0429 20:08:21.052219   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.052230   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:21.052237   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:21.052300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:21.090838   66615 cri.go:89] found id: ""
	I0429 20:08:21.090870   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.090882   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:21.090889   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:21.090953   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:21.137997   66615 cri.go:89] found id: ""
	I0429 20:08:21.138044   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.138056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:21.138082   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:21.138171   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:21.176278   66615 cri.go:89] found id: ""
	I0429 20:08:21.176311   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.176323   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:21.176331   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:21.176390   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:21.213925   66615 cri.go:89] found id: ""
	I0429 20:08:21.213955   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.213966   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:21.213973   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:21.214039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:21.253815   66615 cri.go:89] found id: ""
	I0429 20:08:21.253842   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.253850   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:21.253857   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:21.253905   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:21.296521   66615 cri.go:89] found id: ""
	I0429 20:08:21.296553   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.296565   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:21.296573   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:21.296633   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:21.337114   66615 cri.go:89] found id: ""
	I0429 20:08:21.337143   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.337150   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:21.337158   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:21.337177   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:21.384860   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:21.384901   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:21.443837   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:21.443899   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:21.460084   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:21.460116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:21.541230   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:21.541262   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:21.541278   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.132057   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:24.148381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:24.148458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:24.192469   66615 cri.go:89] found id: ""
	I0429 20:08:24.192499   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.192510   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:24.192516   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:24.192568   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:24.232150   66615 cri.go:89] found id: ""
	I0429 20:08:24.232177   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.232188   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:24.232195   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:24.232260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:24.272679   66615 cri.go:89] found id: ""
	I0429 20:08:24.272705   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.272714   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:24.272719   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:24.272772   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:24.317114   66615 cri.go:89] found id: ""
	I0429 20:08:24.317137   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.317145   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:24.317151   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:24.317200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:24.362251   66615 cri.go:89] found id: ""
	I0429 20:08:24.362279   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.362287   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:24.362294   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:24.362346   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:24.405696   66615 cri.go:89] found id: ""
	I0429 20:08:24.405721   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.405729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:24.405734   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:24.405828   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:24.446837   66615 cri.go:89] found id: ""
	I0429 20:08:24.446864   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.446871   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:24.446878   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:24.446929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:24.493416   66615 cri.go:89] found id: ""
	I0429 20:08:24.493445   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.493454   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:24.493462   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:24.493475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:24.555657   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:24.555693   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:24.572297   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:24.572328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:24.658463   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:24.658487   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:24.658499   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.752064   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:24.752103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:21.557948   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:24.056339   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:25.050530   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:27.554744   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:23.758156   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:26.261403   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:27.303812   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:27.319304   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:27.319373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:27.360473   66615 cri.go:89] found id: ""
	I0429 20:08:27.360509   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.360521   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:27.360529   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:27.360595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:27.404619   66615 cri.go:89] found id: ""
	I0429 20:08:27.404651   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.404668   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:27.404675   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:27.404742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:27.447464   66615 cri.go:89] found id: ""
	I0429 20:08:27.447490   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.447498   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:27.447503   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:27.447556   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:27.489197   66615 cri.go:89] found id: ""
	I0429 20:08:27.489235   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.489246   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:27.489253   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:27.489323   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:27.534354   66615 cri.go:89] found id: ""
	I0429 20:08:27.534387   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.534397   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:27.534404   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:27.534470   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:27.580721   66615 cri.go:89] found id: ""
	I0429 20:08:27.580751   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.580762   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:27.580769   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:27.580841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:27.620000   66615 cri.go:89] found id: ""
	I0429 20:08:27.620033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.620041   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:27.620046   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:27.620096   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:27.659000   66615 cri.go:89] found id: ""
	I0429 20:08:27.659033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.659041   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:27.659050   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:27.659062   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:27.739202   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:27.739241   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:27.784761   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:27.784807   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:27.842707   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:27.842748   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:27.859471   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:27.859498   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:27.942686   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:26.058098   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:28.059648   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.056692   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:32.550893   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:28.757412   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.759070   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.443410   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:30.460332   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:30.460417   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:30.497715   66615 cri.go:89] found id: ""
	I0429 20:08:30.497752   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.497764   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:30.497772   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:30.497841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:30.539376   66615 cri.go:89] found id: ""
	I0429 20:08:30.539409   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.539419   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:30.539426   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:30.539492   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:30.587567   66615 cri.go:89] found id: ""
	I0429 20:08:30.587596   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.587606   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:30.587616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:30.587679   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:30.626198   66615 cri.go:89] found id: ""
	I0429 20:08:30.626228   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.626238   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:30.626246   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:30.626313   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:30.665798   66615 cri.go:89] found id: ""
	I0429 20:08:30.665829   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.665837   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:30.665843   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:30.665909   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:30.708627   66615 cri.go:89] found id: ""
	I0429 20:08:30.708659   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.708671   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:30.708679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:30.708762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:30.754190   66615 cri.go:89] found id: ""
	I0429 20:08:30.754220   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.754230   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:30.754236   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:30.754295   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:30.797383   66615 cri.go:89] found id: ""
	I0429 20:08:30.797410   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.797421   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:30.797432   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:30.797447   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:30.843485   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:30.843512   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:30.900081   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:30.900118   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:30.916095   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:30.916125   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:30.995509   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:30.995529   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:30.995541   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:33.584596   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:33.600969   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:33.601058   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:33.643935   66615 cri.go:89] found id: ""
	I0429 20:08:33.643967   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.643979   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:33.643986   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:33.644049   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:33.681047   66615 cri.go:89] found id: ""
	I0429 20:08:33.681077   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.681085   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:33.681091   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:33.681160   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:33.726450   66615 cri.go:89] found id: ""
	I0429 20:08:33.726479   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.726490   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:33.726501   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:33.726561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:33.765237   66615 cri.go:89] found id: ""
	I0429 20:08:33.765264   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.765275   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:33.765281   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:33.765339   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:33.808333   66615 cri.go:89] found id: ""
	I0429 20:08:33.808366   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.808376   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:33.808383   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:33.808446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:33.854991   66615 cri.go:89] found id: ""
	I0429 20:08:33.855023   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.855034   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:33.855041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:33.855126   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:33.895405   66615 cri.go:89] found id: ""
	I0429 20:08:33.895434   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.895446   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:33.895455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:33.895521   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:33.937265   66615 cri.go:89] found id: ""
	I0429 20:08:33.937289   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.937297   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:33.937306   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:33.937324   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:33.991565   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:33.991594   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:34.006316   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:34.006343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:34.088734   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:34.088762   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:34.088776   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:34.180451   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:34.180489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:30.557020   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:33.058354   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:35.049638   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:37.051464   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:33.256955   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:35.257122   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:37.257629   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:36.727080   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:36.743038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:36.743124   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:36.785441   66615 cri.go:89] found id: ""
	I0429 20:08:36.785465   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.785475   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:36.785482   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:36.785542   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:36.828787   66615 cri.go:89] found id: ""
	I0429 20:08:36.828819   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.828829   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:36.828836   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:36.828896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:36.867712   66615 cri.go:89] found id: ""
	I0429 20:08:36.867738   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.867749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:36.867756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:36.867825   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:36.911435   66615 cri.go:89] found id: ""
	I0429 20:08:36.911462   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.911472   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:36.911478   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:36.911560   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:36.953803   66615 cri.go:89] found id: ""
	I0429 20:08:36.953828   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.953836   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:36.953842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:36.953903   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:36.990305   66615 cri.go:89] found id: ""
	I0429 20:08:36.990329   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.990339   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:36.990347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:36.990434   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:37.029177   66615 cri.go:89] found id: ""
	I0429 20:08:37.029206   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.029225   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:37.029232   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:37.029294   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:37.067583   66615 cri.go:89] found id: ""
	I0429 20:08:37.067605   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.067612   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:37.067619   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:37.067631   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:37.144739   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:37.144776   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:37.144788   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:37.227724   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:37.227762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:37.270383   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:37.270417   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:37.326858   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:37.326890   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:39.843323   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:39.859899   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:39.859961   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:39.903125   66615 cri.go:89] found id: ""
	I0429 20:08:39.903155   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.903164   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:39.903169   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:39.903243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:39.944271   66615 cri.go:89] found id: ""
	I0429 20:08:39.944300   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.944309   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:39.944314   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:39.944363   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:35.557115   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:38.056175   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.550339   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:42.048622   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.756355   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:42.255528   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.989934   66615 cri.go:89] found id: ""
	I0429 20:08:39.989964   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.989972   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:39.989978   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:39.990032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:40.025936   66615 cri.go:89] found id: ""
	I0429 20:08:40.025965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.025976   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:40.025983   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:40.026044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:40.065943   66615 cri.go:89] found id: ""
	I0429 20:08:40.065965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.065976   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:40.065984   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:40.066038   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:40.109986   66615 cri.go:89] found id: ""
	I0429 20:08:40.110018   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.110030   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:40.110038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:40.110115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:40.155610   66615 cri.go:89] found id: ""
	I0429 20:08:40.155716   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.155734   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:40.155745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:40.155803   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:40.196213   66615 cri.go:89] found id: ""
	I0429 20:08:40.196239   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.196246   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:40.196256   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:40.196272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:40.280330   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:40.280372   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:40.326774   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:40.326810   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:40.379438   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:40.379475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:40.395332   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:40.395362   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:40.504413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:43.005046   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:43.020464   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:43.020544   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:43.066403   66615 cri.go:89] found id: ""
	I0429 20:08:43.066432   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.066444   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:43.066452   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:43.066548   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:43.109732   66615 cri.go:89] found id: ""
	I0429 20:08:43.109760   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.109771   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:43.109778   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:43.109850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:43.158457   66615 cri.go:89] found id: ""
	I0429 20:08:43.158483   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.158492   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:43.158498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:43.158561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:43.207170   66615 cri.go:89] found id: ""
	I0429 20:08:43.207201   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.207213   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:43.207221   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:43.207281   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:43.246746   66615 cri.go:89] found id: ""
	I0429 20:08:43.246783   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.246804   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:43.246811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:43.246875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:43.292786   66615 cri.go:89] found id: ""
	I0429 20:08:43.292813   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.292824   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:43.292831   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:43.292896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:43.337509   66615 cri.go:89] found id: ""
	I0429 20:08:43.337537   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.337546   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:43.337551   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:43.337601   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:43.378446   66615 cri.go:89] found id: ""
	I0429 20:08:43.378473   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.378481   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:43.378490   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:43.378502   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:43.460438   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:43.460474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:43.503908   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:43.503945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:43.561661   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:43.561699   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:43.577924   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:43.577954   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:43.667006   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:40.555875   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:43.057183   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:44.049342   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.049873   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:44.256458   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.256554   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.168175   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:46.212494   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:46.212579   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:46.251567   66615 cri.go:89] found id: ""
	I0429 20:08:46.251593   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.251603   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:46.251610   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:46.251673   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:46.291913   66615 cri.go:89] found id: ""
	I0429 20:08:46.291943   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.291955   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:46.291962   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:46.292023   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:46.331801   66615 cri.go:89] found id: ""
	I0429 20:08:46.331827   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.331836   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:46.331842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:46.331899   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:46.375956   66615 cri.go:89] found id: ""
	I0429 20:08:46.375989   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.376001   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:46.376008   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:46.376090   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:46.425572   66615 cri.go:89] found id: ""
	I0429 20:08:46.425599   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.425609   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:46.425618   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:46.425681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:46.468161   66615 cri.go:89] found id: ""
	I0429 20:08:46.468226   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.468249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:46.468263   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:46.468433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:46.512163   66615 cri.go:89] found id: ""
	I0429 20:08:46.512193   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.512205   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:46.512212   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:46.512277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:46.556047   66615 cri.go:89] found id: ""
	I0429 20:08:46.556078   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.556088   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:46.556099   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:46.556111   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:46.609886   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:46.609921   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:46.625848   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:46.625878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:46.699005   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:46.699037   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:46.699053   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:46.783886   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:46.783923   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.331288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:49.344805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:49.344864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:49.381576   66615 cri.go:89] found id: ""
	I0429 20:08:49.381598   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.381605   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:49.381619   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:49.381667   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:49.418276   66615 cri.go:89] found id: ""
	I0429 20:08:49.418316   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.418329   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:49.418336   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:49.418389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:49.460147   66615 cri.go:89] found id: ""
	I0429 20:08:49.460177   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.460188   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:49.460195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:49.460253   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:49.500534   66615 cri.go:89] found id: ""
	I0429 20:08:49.500562   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.500569   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:49.500575   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:49.500632   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:49.538481   66615 cri.go:89] found id: ""
	I0429 20:08:49.538521   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.538534   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:49.538541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:49.538603   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:49.580192   66615 cri.go:89] found id: ""
	I0429 20:08:49.580218   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.580228   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:49.580234   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:49.580299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:49.616400   66615 cri.go:89] found id: ""
	I0429 20:08:49.616427   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.616437   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:49.616444   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:49.616551   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:49.652871   66615 cri.go:89] found id: ""
	I0429 20:08:49.652900   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.652918   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:49.652931   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:49.652947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:49.728173   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:49.728200   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:49.728212   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:49.813701   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:49.813749   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.855685   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:49.855712   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:49.906480   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:49.906514   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:45.559939   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.056008   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.056054   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.052578   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.550638   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.550910   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.257460   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.259418   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.757365   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.422430   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:52.437412   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:52.437488   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:52.476896   66615 cri.go:89] found id: ""
	I0429 20:08:52.476919   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.476927   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:52.476932   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:52.476976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:52.517266   66615 cri.go:89] found id: ""
	I0429 20:08:52.517298   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.517310   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:52.517318   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:52.517381   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:52.560886   66615 cri.go:89] found id: ""
	I0429 20:08:52.560909   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.560917   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:52.560922   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:52.560969   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:52.601362   66615 cri.go:89] found id: ""
	I0429 20:08:52.601398   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.601419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:52.601429   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:52.601506   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:52.639544   66615 cri.go:89] found id: ""
	I0429 20:08:52.639580   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.639591   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:52.639599   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:52.639652   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:52.681088   66615 cri.go:89] found id: ""
	I0429 20:08:52.681120   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.681130   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:52.681138   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:52.681204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:52.721777   66615 cri.go:89] found id: ""
	I0429 20:08:52.721802   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.721820   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:52.721828   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:52.721900   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:52.762823   66615 cri.go:89] found id: ""
	I0429 20:08:52.762845   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.762856   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:52.762863   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:52.762875   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:52.819291   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:52.819326   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:52.847120   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:52.847165   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:52.956274   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:52.956301   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:52.956317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:53.041636   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:53.041676   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:52.056558   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:54.555745   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.051656   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:57.549668   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.257083   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:57.757855   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.592636   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:55.607372   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:55.607449   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:55.643959   66615 cri.go:89] found id: ""
	I0429 20:08:55.643991   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.644000   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:55.644005   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:55.644061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:55.682272   66615 cri.go:89] found id: ""
	I0429 20:08:55.682304   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.682315   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:55.682323   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:55.682384   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:55.720157   66615 cri.go:89] found id: ""
	I0429 20:08:55.720189   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.720200   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:55.720207   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:55.720272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:55.761748   66615 cri.go:89] found id: ""
	I0429 20:08:55.761773   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.761781   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:55.761786   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:55.761842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:55.802377   66615 cri.go:89] found id: ""
	I0429 20:08:55.802405   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.802416   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:55.802423   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:55.802494   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:55.838986   66615 cri.go:89] found id: ""
	I0429 20:08:55.839016   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.839024   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:55.839030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:55.839077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:55.874991   66615 cri.go:89] found id: ""
	I0429 20:08:55.875022   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.875032   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:55.875039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:55.875106   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:55.913561   66615 cri.go:89] found id: ""
	I0429 20:08:55.913595   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.913607   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:55.913618   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:55.913633   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:55.965355   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:55.965391   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:55.981222   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:55.981259   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:56.056656   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:56.056685   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:56.056701   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:56.135276   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:56.135309   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:58.682855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:58.701679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:58.701769   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:58.760807   66615 cri.go:89] found id: ""
	I0429 20:08:58.760828   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.760841   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:58.760858   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:58.760910   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:58.835167   66615 cri.go:89] found id: ""
	I0429 20:08:58.835204   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.835216   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:58.835223   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:58.835289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:58.877367   66615 cri.go:89] found id: ""
	I0429 20:08:58.877398   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.877409   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:58.877417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:58.877483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:58.923726   66615 cri.go:89] found id: ""
	I0429 20:08:58.923751   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.923760   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:58.923766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:58.923817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:58.967780   66615 cri.go:89] found id: ""
	I0429 20:08:58.967804   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.967811   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:58.967816   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:58.967865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:59.010646   66615 cri.go:89] found id: ""
	I0429 20:08:59.010682   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.010690   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:59.010697   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:59.010759   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:59.057380   66615 cri.go:89] found id: ""
	I0429 20:08:59.057408   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.057418   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:59.057426   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:59.057483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:59.099669   66615 cri.go:89] found id: ""
	I0429 20:08:59.099698   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.099706   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:59.099715   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:59.099731   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:59.146831   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:59.146861   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:59.204232   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:59.204274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:59.219799   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:59.219824   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:59.305438   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:59.305465   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:59.305481   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:56.555976   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:58.557892   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:00.049511   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:02.050709   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:00.256064   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:02.257053   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:01.885861   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:01.900746   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:01.900808   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:01.942174   66615 cri.go:89] found id: ""
	I0429 20:09:01.942210   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.942218   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:01.942224   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:01.942285   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:01.986463   66615 cri.go:89] found id: ""
	I0429 20:09:01.986491   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.986502   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:01.986509   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:01.986570   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:02.026290   66615 cri.go:89] found id: ""
	I0429 20:09:02.026314   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.026321   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:02.026327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:02.026375   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:02.064239   66615 cri.go:89] found id: ""
	I0429 20:09:02.064259   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.064266   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:02.064271   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:02.064321   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:02.105807   66615 cri.go:89] found id: ""
	I0429 20:09:02.105838   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.105857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:02.105866   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:02.105926   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:02.144939   66615 cri.go:89] found id: ""
	I0429 20:09:02.144962   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.144970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:02.144975   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:02.145037   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:02.192866   66615 cri.go:89] found id: ""
	I0429 20:09:02.192891   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.192899   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:02.192905   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:02.192955   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:02.232485   66615 cri.go:89] found id: ""
	I0429 20:09:02.232515   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.232524   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:02.232533   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:02.232550   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:02.287374   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:02.287402   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:02.302979   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:02.303009   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:02.380693   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:02.380713   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:02.380725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:02.467048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:02.467084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:01.055311   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:03.055538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:05.056325   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:04.051014   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:06.556497   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:04.758329   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:07.256328   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:05.018176   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:05.033178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:05.033238   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:05.079008   66615 cri.go:89] found id: ""
	I0429 20:09:05.079034   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.079043   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:05.079050   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:05.079113   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:05.118620   66615 cri.go:89] found id: ""
	I0429 20:09:05.118642   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.118650   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:05.118655   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:05.118714   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:05.159603   66615 cri.go:89] found id: ""
	I0429 20:09:05.159646   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.159660   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:05.159666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:05.159733   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:05.200224   66615 cri.go:89] found id: ""
	I0429 20:09:05.200252   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.200262   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:05.200270   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:05.200344   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:05.246341   66615 cri.go:89] found id: ""
	I0429 20:09:05.246384   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.246396   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:05.246403   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:05.246471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:05.286126   66615 cri.go:89] found id: ""
	I0429 20:09:05.286153   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.286163   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:05.286171   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:05.286235   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:05.326911   66615 cri.go:89] found id: ""
	I0429 20:09:05.326941   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.326952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:05.326958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:05.327019   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:05.365564   66615 cri.go:89] found id: ""
	I0429 20:09:05.365592   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.365602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:05.365621   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:05.365637   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:05.445857   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:05.445877   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:05.445889   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:05.530129   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:05.530164   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:05.573936   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:05.573971   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:05.631263   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:05.631299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:08.147288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:08.162949   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:08.163021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:08.203009   66615 cri.go:89] found id: ""
	I0429 20:09:08.203033   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.203041   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:08.203047   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:08.203112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:08.241708   66615 cri.go:89] found id: ""
	I0429 20:09:08.241735   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.241744   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:08.241750   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:08.241801   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:08.283976   66615 cri.go:89] found id: ""
	I0429 20:09:08.284005   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.284017   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:08.284023   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:08.284091   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:08.323909   66615 cri.go:89] found id: ""
	I0429 20:09:08.323939   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.323951   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:08.323962   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:08.324031   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:08.363236   66615 cri.go:89] found id: ""
	I0429 20:09:08.363263   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.363271   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:08.363276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:08.363328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:08.401767   66615 cri.go:89] found id: ""
	I0429 20:09:08.401790   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.401798   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:08.401803   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:08.401851   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:08.443678   66615 cri.go:89] found id: ""
	I0429 20:09:08.443709   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.443726   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:08.443731   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:08.443791   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:08.489025   66615 cri.go:89] found id: ""
	I0429 20:09:08.489069   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.489103   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:08.489129   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:08.489163   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:08.543421   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:08.543462   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:08.560425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:08.560459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:08.642819   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:08.642840   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:08.642855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:08.726644   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:08.726682   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:07.555523   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.556138   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.049664   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.050246   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.256452   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.257458   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.277817   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:11.292340   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:11.292420   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:11.330721   66615 cri.go:89] found id: ""
	I0429 20:09:11.330756   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.330768   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:11.330776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:11.330850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:11.372057   66615 cri.go:89] found id: ""
	I0429 20:09:11.372089   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.372098   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:11.372103   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:11.372155   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:11.414786   66615 cri.go:89] found id: ""
	I0429 20:09:11.414814   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.414825   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:11.414832   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:11.414898   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:11.454934   66615 cri.go:89] found id: ""
	I0429 20:09:11.454961   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.454969   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:11.454974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:11.455039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:11.494169   66615 cri.go:89] found id: ""
	I0429 20:09:11.494200   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.494211   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:11.494217   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:11.494277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:11.541646   66615 cri.go:89] found id: ""
	I0429 20:09:11.541684   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.541694   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:11.541701   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:11.541766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:11.584025   66615 cri.go:89] found id: ""
	I0429 20:09:11.584055   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.584067   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:11.584075   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:11.584138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:11.622425   66615 cri.go:89] found id: ""
	I0429 20:09:11.622459   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.622471   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:11.622481   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:11.622493   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:11.676416   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:11.676450   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:11.693793   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:11.693822   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:11.771410   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:11.771437   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:11.771454   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:11.854969   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:11.855047   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:14.398871   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:14.415894   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:14.415983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:14.454718   66615 cri.go:89] found id: ""
	I0429 20:09:14.454752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.454763   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:14.454773   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:14.454836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:14.498562   66615 cri.go:89] found id: ""
	I0429 20:09:14.498591   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.498602   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:14.498609   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:14.498669   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:14.536357   66615 cri.go:89] found id: ""
	I0429 20:09:14.536384   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.536395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:14.536402   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:14.536460   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:14.577240   66615 cri.go:89] found id: ""
	I0429 20:09:14.577274   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.577284   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:14.577291   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:14.577372   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:14.617231   66615 cri.go:89] found id: ""
	I0429 20:09:14.617266   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.617279   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:14.617287   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:14.617355   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:14.659053   66615 cri.go:89] found id: ""
	I0429 20:09:14.659081   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.659090   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:14.659096   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:14.659145   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:14.708723   66615 cri.go:89] found id: ""
	I0429 20:09:14.708752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.708760   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:14.708766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:14.708814   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:14.753732   66615 cri.go:89] found id: ""
	I0429 20:09:14.753762   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.753773   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:14.753783   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:14.753798   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:14.771952   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:14.771985   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:14.842649   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:14.842680   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:14.842696   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:14.925565   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:14.925603   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:11.556903   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:14.057196   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:13.550999   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:16.054439   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:13.257735   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:15.756651   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:17.756760   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:14.975731   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:14.975765   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.528872   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:17.544373   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:17.544455   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:17.582977   66615 cri.go:89] found id: ""
	I0429 20:09:17.583001   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.583009   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:17.583014   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:17.583079   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:17.620322   66615 cri.go:89] found id: ""
	I0429 20:09:17.620352   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.620368   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:17.620373   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:17.620421   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:17.664339   66615 cri.go:89] found id: ""
	I0429 20:09:17.664367   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.664375   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:17.664381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:17.664433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:17.705150   66615 cri.go:89] found id: ""
	I0429 20:09:17.705175   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.705184   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:17.705189   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:17.705239   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:17.749713   66615 cri.go:89] found id: ""
	I0429 20:09:17.749738   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.749747   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:17.749752   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:17.749850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:17.791528   66615 cri.go:89] found id: ""
	I0429 20:09:17.791552   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.791560   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:17.791566   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:17.791615   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:17.834994   66615 cri.go:89] found id: ""
	I0429 20:09:17.835024   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.835035   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:17.835050   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:17.835107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:17.872194   66615 cri.go:89] found id: ""
	I0429 20:09:17.872226   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.872236   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:17.872248   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:17.872263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.926899   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:17.926936   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:17.944184   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:17.944218   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:18.029224   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:18.029246   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:18.029258   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:18.111112   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:18.111147   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:16.557282   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:19.056682   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:18.549106   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:20.550026   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:19.758897   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:22.257104   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:20.655965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:20.671420   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:20.671487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:20.710100   66615 cri.go:89] found id: ""
	I0429 20:09:20.710132   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.710144   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:20.710151   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:20.710221   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:20.748849   66615 cri.go:89] found id: ""
	I0429 20:09:20.748877   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.748888   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:20.748894   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:20.748956   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:20.788113   66615 cri.go:89] found id: ""
	I0429 20:09:20.788140   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.788151   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:20.788157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:20.788217   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:20.831432   66615 cri.go:89] found id: ""
	I0429 20:09:20.831455   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.831462   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:20.831470   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:20.831518   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:20.878156   66615 cri.go:89] found id: ""
	I0429 20:09:20.878183   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.878191   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:20.878197   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:20.878262   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:20.920691   66615 cri.go:89] found id: ""
	I0429 20:09:20.920718   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.920729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:20.920735   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:20.920795   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:20.960674   66615 cri.go:89] found id: ""
	I0429 20:09:20.960709   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.960719   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:20.960726   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:20.960786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:21.006462   66615 cri.go:89] found id: ""
	I0429 20:09:21.006486   66615 logs.go:276] 0 containers: []
	W0429 20:09:21.006495   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:21.006503   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:21.006518   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:21.060040   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:21.060076   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:21.077141   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:21.077171   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:21.157058   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:21.157083   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:21.157096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:21.265626   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:21.265662   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:23.813718   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:23.828338   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:23.828400   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:23.868730   66615 cri.go:89] found id: ""
	I0429 20:09:23.868760   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.868771   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:23.868776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:23.868842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:23.907919   66615 cri.go:89] found id: ""
	I0429 20:09:23.907941   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.907949   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:23.907956   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:23.908011   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:23.956769   66615 cri.go:89] found id: ""
	I0429 20:09:23.956794   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.956805   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:23.956811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:23.956875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:23.998578   66615 cri.go:89] found id: ""
	I0429 20:09:23.998612   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.998621   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:23.998628   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:23.998681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:24.037458   66615 cri.go:89] found id: ""
	I0429 20:09:24.037485   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.037492   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:24.037499   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:24.037562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:24.078305   66615 cri.go:89] found id: ""
	I0429 20:09:24.078336   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.078351   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:24.078358   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:24.078418   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:24.120100   66615 cri.go:89] found id: ""
	I0429 20:09:24.120129   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.120139   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:24.120147   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:24.120211   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:24.160953   66615 cri.go:89] found id: ""
	I0429 20:09:24.160988   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.161000   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:24.161012   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:24.161029   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:24.176654   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:24.176686   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:24.256631   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:24.256652   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:24.256668   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:24.335379   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:24.335424   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:24.379616   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:24.379649   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:21.556726   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:24.057483   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:23.050004   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:25.550882   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:27.551051   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:24.257726   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:26.757098   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:26.937283   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:26.956185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:26.956252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:26.997000   66615 cri.go:89] found id: ""
	I0429 20:09:26.997034   66615 logs.go:276] 0 containers: []
	W0429 20:09:26.997046   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:26.997053   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:26.997115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:27.042494   66615 cri.go:89] found id: ""
	I0429 20:09:27.042527   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.042538   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:27.042546   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:27.042608   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:27.086170   66615 cri.go:89] found id: ""
	I0429 20:09:27.086199   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.086211   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:27.086218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:27.086282   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:27.126502   66615 cri.go:89] found id: ""
	I0429 20:09:27.126531   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.126542   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:27.126560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:27.126635   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:27.175102   66615 cri.go:89] found id: ""
	I0429 20:09:27.175134   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.175142   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:27.175148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:27.175216   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:27.215983   66615 cri.go:89] found id: ""
	I0429 20:09:27.216013   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.216025   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:27.216033   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:27.216097   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:27.256427   66615 cri.go:89] found id: ""
	I0429 20:09:27.256456   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.256467   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:27.256474   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:27.256540   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:27.298444   66615 cri.go:89] found id: ""
	I0429 20:09:27.298479   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.298490   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:27.298501   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:27.298517   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:27.381579   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:27.381625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:27.429304   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:27.429350   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:27.483044   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:27.483082   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:27.500304   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:27.500332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:27.583909   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:26.555285   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:28.560544   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:30.049769   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:32.050537   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:29.256689   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:31.257554   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:30.084904   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:30.102417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:30.102486   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:30.146726   66615 cri.go:89] found id: ""
	I0429 20:09:30.146748   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.146755   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:30.146761   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:30.146809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:30.190739   66615 cri.go:89] found id: ""
	I0429 20:09:30.190768   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.190780   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:30.190788   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:30.190853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:30.228836   66615 cri.go:89] found id: ""
	I0429 20:09:30.228864   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.228879   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:30.228887   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:30.228951   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:30.270876   66615 cri.go:89] found id: ""
	I0429 20:09:30.270912   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.270920   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:30.270925   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:30.270995   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:30.310762   66615 cri.go:89] found id: ""
	I0429 20:09:30.310787   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.310795   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:30.310801   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:30.310850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:30.356339   66615 cri.go:89] found id: ""
	I0429 20:09:30.356363   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.356371   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:30.356376   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:30.356430   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:30.395540   66615 cri.go:89] found id: ""
	I0429 20:09:30.395575   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.395589   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:30.395598   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:30.395671   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:30.446237   66615 cri.go:89] found id: ""
	I0429 20:09:30.446263   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.446276   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:30.446286   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:30.446301   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:30.537309   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:30.537334   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:30.537349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:30.629116   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:30.629151   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:30.683308   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:30.683337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:30.735879   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:30.735910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.252322   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:33.268276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:33.268351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:33.309531   66615 cri.go:89] found id: ""
	I0429 20:09:33.309622   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.309641   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:33.309650   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:33.309719   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:33.367480   66615 cri.go:89] found id: ""
	I0429 20:09:33.367515   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.367527   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:33.367535   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:33.367595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:33.433717   66615 cri.go:89] found id: ""
	I0429 20:09:33.433742   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.433751   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:33.433756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:33.433820   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:33.484053   66615 cri.go:89] found id: ""
	I0429 20:09:33.484081   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.484093   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:33.484100   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:33.484165   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:33.524103   66615 cri.go:89] found id: ""
	I0429 20:09:33.524126   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.524136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:33.524143   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:33.524204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:33.565692   66615 cri.go:89] found id: ""
	I0429 20:09:33.565711   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.565719   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:33.565724   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:33.565784   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:33.607119   66615 cri.go:89] found id: ""
	I0429 20:09:33.607143   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.607153   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:33.607160   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:33.607225   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:33.648407   66615 cri.go:89] found id: ""
	I0429 20:09:33.648432   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.648440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:33.648449   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:33.648463   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:33.730744   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:33.730781   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:33.774295   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:33.774328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:33.829609   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:33.829653   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.846048   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:33.846092   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:33.924413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:31.056307   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:33.056538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:34.548872   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.550765   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:33.758571   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.257361   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.425072   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:36.440185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:36.440268   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:36.484364   66615 cri.go:89] found id: ""
	I0429 20:09:36.484386   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.484394   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:36.484400   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:36.484450   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:36.520436   66615 cri.go:89] found id: ""
	I0429 20:09:36.520466   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.520478   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:36.520487   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:36.520549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:36.563597   66615 cri.go:89] found id: ""
	I0429 20:09:36.563622   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.563630   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:36.563635   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:36.563704   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:36.613106   66615 cri.go:89] found id: ""
	I0429 20:09:36.613134   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.613143   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:36.613148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:36.613204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:36.658127   66615 cri.go:89] found id: ""
	I0429 20:09:36.658151   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.658159   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:36.658166   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:36.658229   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:36.707388   66615 cri.go:89] found id: ""
	I0429 20:09:36.707415   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.707423   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:36.707430   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:36.707479   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:36.753363   66615 cri.go:89] found id: ""
	I0429 20:09:36.753394   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.753405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:36.753413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:36.753475   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:36.801492   66615 cri.go:89] found id: ""
	I0429 20:09:36.801513   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.801521   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:36.801530   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:36.801542   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:36.857055   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:36.857108   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:36.874567   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:36.874595   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:36.956176   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:36.956202   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:36.956217   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:37.039958   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:37.039997   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:39.591442   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:39.607842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:39.607927   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:39.651917   66615 cri.go:89] found id: ""
	I0429 20:09:39.651941   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.651948   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:39.651955   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:39.652020   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:39.690032   66615 cri.go:89] found id: ""
	I0429 20:09:39.690059   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.690078   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:39.690086   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:39.690152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:39.733176   66615 cri.go:89] found id: ""
	I0429 20:09:39.733200   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.733209   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:39.733215   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:39.733261   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:39.779528   66615 cri.go:89] found id: ""
	I0429 20:09:39.779560   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.779572   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:39.779581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:39.779650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:39.822408   66615 cri.go:89] found id: ""
	I0429 20:09:39.822436   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.822445   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:39.822452   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:39.822522   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:39.864895   66615 cri.go:89] found id: ""
	I0429 20:09:39.864922   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.864930   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:39.864938   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:39.865008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:39.907498   66615 cri.go:89] found id: ""
	I0429 20:09:39.907523   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.907533   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:39.907539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:39.907606   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:39.948400   66615 cri.go:89] found id: ""
	I0429 20:09:39.948430   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.948440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:39.948449   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:39.948465   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:35.557262   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:38.056877   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:40.058568   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:39.049938   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:41.050139   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:38.756883   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:41.256775   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:39.964733   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:39.964763   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:40.043568   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:40.043593   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:40.043609   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:40.130776   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:40.130815   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:40.182011   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:40.182042   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:42.739068   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:42.756144   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:42.756286   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:42.798776   66615 cri.go:89] found id: ""
	I0429 20:09:42.798801   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.798810   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:42.798815   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:42.798861   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:42.837122   66615 cri.go:89] found id: ""
	I0429 20:09:42.837146   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.837154   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:42.837159   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:42.837205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:42.875435   66615 cri.go:89] found id: ""
	I0429 20:09:42.875461   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.875471   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:42.875479   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:42.875536   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:42.920044   66615 cri.go:89] found id: ""
	I0429 20:09:42.920076   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.920087   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:42.920094   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:42.920175   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:42.960122   66615 cri.go:89] found id: ""
	I0429 20:09:42.960152   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.960163   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:42.960169   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:42.960215   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:42.999784   66615 cri.go:89] found id: ""
	I0429 20:09:42.999811   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.999829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:42.999837   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:42.999917   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:43.040882   66615 cri.go:89] found id: ""
	I0429 20:09:43.040930   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.040952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:43.040959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:43.041044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:43.082596   66615 cri.go:89] found id: ""
	I0429 20:09:43.082627   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.082639   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:43.082650   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:43.082672   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:43.140302   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:43.140343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:43.157508   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:43.157547   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:43.241025   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:43.241047   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:43.241061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:43.325820   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:43.325855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:42.058727   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:44.556415   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:43.051020   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.550017   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:43.258400   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.756441   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:47.757029   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.871561   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:45.887323   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:45.887398   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:45.930021   66615 cri.go:89] found id: ""
	I0429 20:09:45.930050   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.930062   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:45.930088   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:45.930148   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:45.971404   66615 cri.go:89] found id: ""
	I0429 20:09:45.971434   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.971445   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:45.971452   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:45.971513   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:46.018801   66615 cri.go:89] found id: ""
	I0429 20:09:46.018825   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.018833   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:46.018838   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:46.018886   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:46.065118   66615 cri.go:89] found id: ""
	I0429 20:09:46.065140   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.065148   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:46.065153   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:46.065201   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:46.105244   66615 cri.go:89] found id: ""
	I0429 20:09:46.105271   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.105294   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:46.105309   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:46.105373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:46.153736   66615 cri.go:89] found id: ""
	I0429 20:09:46.153759   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.153768   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:46.153773   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:46.153836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:46.198940   66615 cri.go:89] found id: ""
	I0429 20:09:46.198965   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.198973   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:46.198979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:46.199064   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:46.238001   66615 cri.go:89] found id: ""
	I0429 20:09:46.238031   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.238044   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:46.238056   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:46.238087   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:46.292309   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:46.292357   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:46.307243   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:46.307274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:46.386832   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:46.386852   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:46.386869   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:46.468856   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:46.468891   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.017354   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:49.032753   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:49.032832   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:49.075345   66615 cri.go:89] found id: ""
	I0429 20:09:49.075375   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.075388   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:49.075394   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:49.075447   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:49.115294   66615 cri.go:89] found id: ""
	I0429 20:09:49.115328   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.115339   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:49.115347   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:49.115412   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:49.164115   66615 cri.go:89] found id: ""
	I0429 20:09:49.164140   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.164148   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:49.164154   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:49.164210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:49.207643   66615 cri.go:89] found id: ""
	I0429 20:09:49.207668   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.207679   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:49.207698   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:49.207762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:49.247121   66615 cri.go:89] found id: ""
	I0429 20:09:49.247147   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.247156   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:49.247162   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:49.247220   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:49.288594   66615 cri.go:89] found id: ""
	I0429 20:09:49.288626   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.288636   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:49.288643   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:49.288711   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:49.330243   66615 cri.go:89] found id: ""
	I0429 20:09:49.330273   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.330290   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:49.330300   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:49.330365   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:49.371304   66615 cri.go:89] found id: ""
	I0429 20:09:49.371348   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.371360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:49.371372   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:49.371392   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:49.450910   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:49.450949   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.494940   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:49.494970   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:49.553320   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:49.553364   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:49.568850   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:49.568878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:49.644932   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:46.559246   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:49.056790   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:48.050285   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:50.050579   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.549882   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:49.757113   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.258680   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.145702   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:52.162681   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:52.162756   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:52.204816   66615 cri.go:89] found id: ""
	I0429 20:09:52.204858   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.204870   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:52.204888   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:52.204963   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:52.248481   66615 cri.go:89] found id: ""
	I0429 20:09:52.248510   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.248519   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:52.248525   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:52.248596   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:52.289158   66615 cri.go:89] found id: ""
	I0429 20:09:52.289186   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.289194   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:52.289200   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:52.289260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:52.329905   66615 cri.go:89] found id: ""
	I0429 20:09:52.329931   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.329942   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:52.329950   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:52.330025   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:52.372523   66615 cri.go:89] found id: ""
	I0429 20:09:52.372546   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.372554   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:52.372560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:52.372623   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:52.414936   66615 cri.go:89] found id: ""
	I0429 20:09:52.414970   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.414982   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:52.414989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:52.415056   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:52.454139   66615 cri.go:89] found id: ""
	I0429 20:09:52.454164   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.454172   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:52.454178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:52.454236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:52.494093   66615 cri.go:89] found id: ""
	I0429 20:09:52.494129   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.494142   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:52.494155   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:52.494195   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:52.552104   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:52.552142   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:52.568430   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:52.568459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:52.649708   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:52.649736   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:52.649752   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:52.746231   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:52.746272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:51.057536   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:53.556862   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:55.049835   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:57.050606   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:54.759308   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:57.256396   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:55.296228   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:55.311257   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:55.311328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:55.352071   66615 cri.go:89] found id: ""
	I0429 20:09:55.352098   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.352109   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:55.352116   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:55.352177   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:55.399806   66615 cri.go:89] found id: ""
	I0429 20:09:55.399837   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.399847   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:55.399860   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:55.399947   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:55.444372   66615 cri.go:89] found id: ""
	I0429 20:09:55.444398   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.444406   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:55.444411   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:55.444468   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:55.485542   66615 cri.go:89] found id: ""
	I0429 20:09:55.485568   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.485579   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:55.485586   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:55.485670   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:55.535452   66615 cri.go:89] found id: ""
	I0429 20:09:55.535483   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.535494   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:55.535502   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:55.535566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:55.578009   66615 cri.go:89] found id: ""
	I0429 20:09:55.578036   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.578048   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:55.578056   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:55.578138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:55.618302   66615 cri.go:89] found id: ""
	I0429 20:09:55.618336   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.618347   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:55.618355   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:55.618419   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:55.660489   66615 cri.go:89] found id: ""
	I0429 20:09:55.660518   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.660526   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:55.660535   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:55.660548   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:55.713953   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:55.713993   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:55.729624   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:55.729656   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:55.813718   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:55.813746   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:55.813762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:55.898805   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:55.898849   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:58.467014   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:58.482852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:58.482925   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:58.522862   66615 cri.go:89] found id: ""
	I0429 20:09:58.522896   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.522908   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:58.522916   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:58.523000   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:58.568234   66615 cri.go:89] found id: ""
	I0429 20:09:58.568259   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.568266   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:58.568272   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:58.568327   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:58.609147   66615 cri.go:89] found id: ""
	I0429 20:09:58.609175   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.609185   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:58.609192   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:58.609265   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:58.657074   66615 cri.go:89] found id: ""
	I0429 20:09:58.657104   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.657115   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:58.657122   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:58.657186   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:58.706819   66615 cri.go:89] found id: ""
	I0429 20:09:58.706846   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.706857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:58.706865   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:58.706929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:58.754967   66615 cri.go:89] found id: ""
	I0429 20:09:58.754998   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.755007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:58.755018   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:58.755078   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:58.793657   66615 cri.go:89] found id: ""
	I0429 20:09:58.793694   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.793704   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:58.793709   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:58.793766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:58.832023   66615 cri.go:89] found id: ""
	I0429 20:09:58.832055   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.832066   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:58.832078   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:58.832094   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:58.886568   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:58.886605   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:58.902126   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:58.902154   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:58.986786   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:58.986814   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:58.986831   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:59.072258   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:59.072296   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:55.557245   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:58.056570   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:59.549825   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:02.050651   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:59.756493   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:01.756935   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:01.620172   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:01.636958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:01.637055   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:01.703865   66615 cri.go:89] found id: ""
	I0429 20:10:01.703890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.703899   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:01.703905   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:01.703950   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:01.742655   66615 cri.go:89] found id: ""
	I0429 20:10:01.742684   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.742692   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:01.742707   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:01.742778   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:01.782866   66615 cri.go:89] found id: ""
	I0429 20:10:01.782890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.782901   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:01.782908   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:01.782964   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:01.822958   66615 cri.go:89] found id: ""
	I0429 20:10:01.822984   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.822992   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:01.822997   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:01.823044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:01.868581   66615 cri.go:89] found id: ""
	I0429 20:10:01.868604   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.868612   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:01.868622   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:01.868675   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:01.908216   66615 cri.go:89] found id: ""
	I0429 20:10:01.908241   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.908249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:01.908255   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:01.908328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:01.953100   66615 cri.go:89] found id: ""
	I0429 20:10:01.953131   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.953142   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:01.953150   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:01.953213   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:01.999940   66615 cri.go:89] found id: ""
	I0429 20:10:01.999974   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.999988   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:01.999999   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:02.000012   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:02.061669   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:02.061704   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:02.077609   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:02.077640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:02.169643   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:02.169666   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:02.169679   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:02.250615   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:02.250657   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:04.803629   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:04.819286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:04.819364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:04.860501   66615 cri.go:89] found id: ""
	I0429 20:10:04.860530   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.860541   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:04.860548   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:04.860672   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:04.898444   66615 cri.go:89] found id: ""
	I0429 20:10:04.898472   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.898480   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:04.898486   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:04.898546   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:04.936569   66615 cri.go:89] found id: ""
	I0429 20:10:04.936599   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.936609   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:04.936617   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:04.936695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:00.556325   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:02.557754   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:05.058245   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:04.551711   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:07.050327   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:03.757096   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:06.257529   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:04.979667   66615 cri.go:89] found id: ""
	I0429 20:10:04.979696   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.979708   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:04.979715   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:04.979768   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:05.019608   66615 cri.go:89] found id: ""
	I0429 20:10:05.019638   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.019650   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:05.019658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:05.019724   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:05.063723   66615 cri.go:89] found id: ""
	I0429 20:10:05.063749   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.063758   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:05.063765   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:05.063821   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:05.106676   66615 cri.go:89] found id: ""
	I0429 20:10:05.106704   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.106714   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:05.106721   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:05.106783   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:05.147652   66615 cri.go:89] found id: ""
	I0429 20:10:05.147683   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.147693   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:05.147704   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:05.147721   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:05.189048   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:05.189085   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:05.248635   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:05.248669   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:05.265791   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:05.265826   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:05.343190   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:05.343217   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:05.343234   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:07.926868   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:07.942581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:07.942656   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:07.981316   66615 cri.go:89] found id: ""
	I0429 20:10:07.981349   66615 logs.go:276] 0 containers: []
	W0429 20:10:07.981361   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:07.981368   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:07.981429   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:08.024017   66615 cri.go:89] found id: ""
	I0429 20:10:08.024045   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.024056   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:08.024062   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:08.024146   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:08.075761   66615 cri.go:89] found id: ""
	I0429 20:10:08.075786   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.075798   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:08.075805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:08.075864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:08.146501   66615 cri.go:89] found id: ""
	I0429 20:10:08.146528   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.146536   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:08.146541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:08.146624   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:08.204987   66615 cri.go:89] found id: ""
	I0429 20:10:08.205013   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.205021   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:08.205027   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:08.205083   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:08.244930   66615 cri.go:89] found id: ""
	I0429 20:10:08.244959   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.244970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:08.244979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:08.245040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:08.284204   66615 cri.go:89] found id: ""
	I0429 20:10:08.284232   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.284243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:08.284250   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:08.284305   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:08.324077   66615 cri.go:89] found id: ""
	I0429 20:10:08.324102   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.324113   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:08.324123   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:08.324139   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:08.341584   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:08.341614   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:08.429808   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:08.429827   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:08.429840   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:08.509906   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:08.509942   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:08.562662   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:08.562697   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:07.557462   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:10.055718   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:09.553108   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:12.050533   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:12.543954   66218 pod_ready.go:81] duration metric: took 4m0.001047967s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" ...
	E0429 20:10:12.543994   66218 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 20:10:12.544032   66218 pod_ready.go:38] duration metric: took 4m6.615064199s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:10:12.544058   66218 kubeadm.go:591] duration metric: took 4m18.60301174s to restartPrimaryControlPlane
	W0429 20:10:12.544116   66218 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:10:12.544146   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:10:08.757127   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:10.760764   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:11.121673   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:11.137328   66615 kubeadm.go:591] duration metric: took 4m4.72832668s to restartPrimaryControlPlane
	W0429 20:10:11.137411   66615 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:10:11.137446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:10:13.254357   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.116867978s)
	I0429 20:10:13.254436   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:13.275293   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:10:13.287073   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:10:13.298046   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:10:13.298080   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:10:13.298132   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:10:13.311790   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:10:13.311861   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:10:13.323201   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:10:13.334284   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:10:13.334357   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:10:13.348597   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.361993   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:10:13.362055   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.376185   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:10:13.389715   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:10:13.389778   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:10:13.403955   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:10:13.675887   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:10:12.056403   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:14.059895   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:13.257345   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:15.257388   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:17.259138   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:16.557200   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:18.559617   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:19.756708   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:21.757655   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:21.056581   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:23.057477   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:24.256386   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:26.757303   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:25.556902   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:28.055172   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:30.056549   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:29.256790   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:31.757538   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:32.560174   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:35.056286   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:33.758717   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:36.257274   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:37.056603   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:39.557292   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:38.757913   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:40.758857   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:42.056927   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:44.557003   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:44.557038   66875 pod_ready.go:81] duration metric: took 4m0.008018273s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	E0429 20:10:44.557050   66875 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0429 20:10:44.557062   66875 pod_ready.go:38] duration metric: took 4m2.911025288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:10:44.557085   66875 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:10:44.557123   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:44.557191   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:44.620871   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:44.620900   66875 cri.go:89] found id: ""
	I0429 20:10:44.620910   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:44.620970   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.626852   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:44.626919   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:44.673726   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:44.673753   66875 cri.go:89] found id: ""
	I0429 20:10:44.673762   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:44.673827   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.680083   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:44.680157   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:44.724866   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:44.724899   66875 cri.go:89] found id: ""
	I0429 20:10:44.724909   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:44.724976   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.730438   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:44.730492   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:44.785159   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:44.785178   66875 cri.go:89] found id: ""
	I0429 20:10:44.785185   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:44.785230   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.790370   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:44.790432   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:44.839200   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:44.839219   66875 cri.go:89] found id: ""
	I0429 20:10:44.839226   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:44.839277   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.845411   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:44.845490   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:44.907184   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:44.907210   66875 cri.go:89] found id: ""
	I0429 20:10:44.907224   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:44.907281   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.914531   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:44.914596   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:44.957389   66875 cri.go:89] found id: ""
	I0429 20:10:44.957422   66875 logs.go:276] 0 containers: []
	W0429 20:10:44.957430   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:44.957436   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:44.957493   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:45.001760   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:45.001783   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:45.001789   66875 cri.go:89] found id: ""
	I0429 20:10:45.001796   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:45.001845   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:45.007293   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:45.012864   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:45.012886   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:45.406875   66218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.862702626s)
	I0429 20:10:45.406957   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:45.424927   66218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:10:45.436628   66218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:10:45.447896   66218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:10:45.447921   66218 kubeadm.go:156] found existing configuration files:
	
	I0429 20:10:45.447970   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:10:45.458604   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:10:45.458662   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:10:45.469701   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:10:45.479738   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:10:45.479796   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:10:45.490097   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:10:45.500840   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:10:45.500903   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:10:45.512918   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:10:45.524679   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:10:45.524756   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:10:45.536044   66218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:10:45.598481   66218 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:10:45.598556   66218 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:10:45.783162   66218 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:10:45.783321   66218 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:10:45.783481   66218 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:10:46.079842   66218 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:10:46.081981   66218 out.go:204]   - Generating certificates and keys ...
	I0429 20:10:46.082084   66218 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:10:46.082174   66218 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:10:46.082295   66218 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:10:46.082382   66218 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:10:46.082485   66218 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:10:46.082578   66218 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:10:46.082694   66218 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:10:46.082793   66218 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:10:46.082906   66218 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:10:46.082976   66218 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:10:46.083009   66218 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:10:46.083070   66218 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:10:46.242368   66218 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:10:46.667998   66218 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:10:46.832801   66218 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:10:47.033146   66218 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:10:47.265305   66218 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:10:47.266631   66218 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:10:47.271057   66218 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:10:47.273021   66218 out.go:204]   - Booting up control plane ...
	I0429 20:10:47.273128   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:10:47.273245   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:10:47.273333   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:10:47.293530   66218 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:10:47.294487   66218 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:10:47.294564   66218 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:10:47.435669   66218 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:10:47.435802   66218 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:10:43.256983   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:45.257106   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:47.757018   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:45.564197   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:45.564231   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:45.635133   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:45.635168   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:45.779957   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:45.779992   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:45.827796   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:45.827828   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:45.870603   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:45.870636   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:45.935181   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:45.935220   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:46.007476   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:46.007518   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:46.071132   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:46.071169   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:46.130185   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:46.130218   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:46.148649   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:46.148684   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:46.196227   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:46.196266   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:46.245663   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:46.245707   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:48.789522   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:48.810752   66875 api_server.go:72] duration metric: took 4m14.399329979s to wait for apiserver process to appear ...
	I0429 20:10:48.810785   66875 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:10:48.810826   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:48.810921   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:48.868391   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:48.868415   66875 cri.go:89] found id: ""
	I0429 20:10:48.868424   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:48.868490   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.874253   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:48.874329   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:48.934057   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:48.934103   66875 cri.go:89] found id: ""
	I0429 20:10:48.934113   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:48.934173   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.940161   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:48.940244   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:48.992205   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:48.992227   66875 cri.go:89] found id: ""
	I0429 20:10:48.992234   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:48.992297   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.997496   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:48.997568   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:49.038579   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:49.038612   66875 cri.go:89] found id: ""
	I0429 20:10:49.038622   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:49.038683   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.045062   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:49.045129   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:49.084533   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:49.084561   66875 cri.go:89] found id: ""
	I0429 20:10:49.084570   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:49.084628   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.089601   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:49.089680   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:49.133281   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:49.133315   66875 cri.go:89] found id: ""
	I0429 20:10:49.133324   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:49.133387   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.140784   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:49.140889   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:49.201071   66875 cri.go:89] found id: ""
	I0429 20:10:49.201102   66875 logs.go:276] 0 containers: []
	W0429 20:10:49.201112   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:49.201117   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:49.201182   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:49.248708   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:49.248732   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:49.248738   66875 cri.go:89] found id: ""
	I0429 20:10:49.248747   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:49.248807   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.254131   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.259257   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:49.259287   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:49.325386   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:49.325417   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:49.371335   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:49.371365   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:49.414056   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:49.414112   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:49.469457   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:49.469493   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:49.523091   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:49.523123   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:49.581937   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:49.581977   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:49.599704   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:49.599738   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:49.738943   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:49.738984   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:49.814482   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:49.814521   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:50.306035   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:50.306084   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:50.371400   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:50.371485   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:50.426578   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:50.426613   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:48.438095   66218 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002489157s
	I0429 20:10:48.438230   66218 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:10:49.758262   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:52.256578   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:53.941848   66218 kubeadm.go:309] [api-check] The API server is healthy after 5.503491397s
	I0429 20:10:53.961404   66218 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:10:53.979792   66218 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:10:54.018524   66218 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:10:54.018776   66218 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-456788 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:10:54.037050   66218 kubeadm.go:309] [bootstrap-token] Using token: 793n05.pmfi0tdyn7q4x0lt
	I0429 20:10:54.038421   66218 out.go:204]   - Configuring RBAC rules ...
	I0429 20:10:54.038551   66218 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:10:54.045190   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:10:54.054625   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:10:54.060216   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:10:54.068878   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:10:54.073537   66218 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:10:54.355285   66218 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:10:54.800956   66218 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:10:55.352995   66218 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:10:55.353026   66218 kubeadm.go:309] 
	I0429 20:10:55.353135   66218 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:10:55.353158   66218 kubeadm.go:309] 
	I0429 20:10:55.353245   66218 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:10:55.353254   66218 kubeadm.go:309] 
	I0429 20:10:55.353290   66218 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:10:55.353382   66218 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:10:55.353456   66218 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:10:55.353467   66218 kubeadm.go:309] 
	I0429 20:10:55.353564   66218 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:10:55.353578   66218 kubeadm.go:309] 
	I0429 20:10:55.353637   66218 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:10:55.353648   66218 kubeadm.go:309] 
	I0429 20:10:55.353735   66218 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:10:55.353937   66218 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:10:55.354052   66218 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:10:55.354095   66218 kubeadm.go:309] 
	I0429 20:10:55.354216   66218 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:10:55.354334   66218 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:10:55.354348   66218 kubeadm.go:309] 
	I0429 20:10:55.354464   66218 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 793n05.pmfi0tdyn7q4x0lt \
	I0429 20:10:55.354615   66218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 20:10:55.354643   66218 kubeadm.go:309] 	--control-plane 
	I0429 20:10:55.354667   66218 kubeadm.go:309] 
	I0429 20:10:55.354799   66218 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:10:55.354810   66218 kubeadm.go:309] 
	I0429 20:10:55.354943   66218 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 793n05.pmfi0tdyn7q4x0lt \
	I0429 20:10:55.355111   66218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
	I0429 20:10:55.355493   66218 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:10:55.355513   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:10:55.355520   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:10:55.357341   66218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:10:52.999575   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:10:53.005598   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 200:
	ok
	I0429 20:10:53.006923   66875 api_server.go:141] control plane version: v1.30.0
	I0429 20:10:53.006951   66875 api_server.go:131] duration metric: took 4.196158371s to wait for apiserver health ...
	I0429 20:10:53.006978   66875 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:10:53.007011   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:53.007073   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:53.064156   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:53.064186   66875 cri.go:89] found id: ""
	I0429 20:10:53.064196   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:53.064256   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.069282   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:53.069361   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:53.128981   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:53.129016   66875 cri.go:89] found id: ""
	I0429 20:10:53.129025   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:53.129086   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.134680   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:53.134779   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:53.188828   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:53.188857   66875 cri.go:89] found id: ""
	I0429 20:10:53.188869   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:53.188922   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.195332   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:53.195401   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:53.245528   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:53.245548   66875 cri.go:89] found id: ""
	I0429 20:10:53.245556   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:53.245617   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.251849   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:53.251925   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:53.302914   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:53.302941   66875 cri.go:89] found id: ""
	I0429 20:10:53.302950   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:53.303004   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.308072   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:53.308138   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:53.358655   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:53.358684   66875 cri.go:89] found id: ""
	I0429 20:10:53.358693   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:53.358753   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.363796   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:53.363875   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:53.413543   66875 cri.go:89] found id: ""
	I0429 20:10:53.413573   66875 logs.go:276] 0 containers: []
	W0429 20:10:53.413586   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:53.413593   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:53.413651   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:53.457365   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:53.457393   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:53.457399   66875 cri.go:89] found id: ""
	I0429 20:10:53.457409   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:53.457473   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.464321   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.469358   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:53.469377   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:53.605546   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:53.605594   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:53.682788   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:53.682837   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:53.725985   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:53.726017   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:53.775864   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:53.775890   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:53.834762   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:53.834801   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:53.853796   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:53.853830   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:53.915651   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:53.915680   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:53.968857   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:53.968885   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:54.024061   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:54.024090   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:54.079637   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:54.079674   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:54.129296   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:54.129325   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:54.499803   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:54.499861   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:57.070245   66875 system_pods.go:59] 8 kube-system pods found
	I0429 20:10:57.070288   66875 system_pods.go:61] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running
	I0429 20:10:57.070296   66875 system_pods.go:61] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running
	I0429 20:10:57.070302   66875 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running
	I0429 20:10:57.070308   66875 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running
	I0429 20:10:57.070313   66875 system_pods.go:61] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running
	I0429 20:10:57.070318   66875 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running
	I0429 20:10:57.070329   66875 system_pods.go:61] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:10:57.070339   66875 system_pods.go:61] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running
	I0429 20:10:57.070353   66875 system_pods.go:74] duration metric: took 4.063366088s to wait for pod list to return data ...
	I0429 20:10:57.070366   66875 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:10:57.077008   66875 default_sa.go:45] found service account: "default"
	I0429 20:10:57.077031   66875 default_sa.go:55] duration metric: took 6.655489ms for default service account to be created ...
	I0429 20:10:57.077040   66875 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:10:57.087665   66875 system_pods.go:86] 8 kube-system pods found
	I0429 20:10:57.087695   66875 system_pods.go:89] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running
	I0429 20:10:57.087701   66875 system_pods.go:89] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running
	I0429 20:10:57.087707   66875 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running
	I0429 20:10:57.087711   66875 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running
	I0429 20:10:57.087715   66875 system_pods.go:89] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running
	I0429 20:10:57.087719   66875 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running
	I0429 20:10:57.087726   66875 system_pods.go:89] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:10:57.087730   66875 system_pods.go:89] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running
	I0429 20:10:57.087740   66875 system_pods.go:126] duration metric: took 10.694398ms to wait for k8s-apps to be running ...
	I0429 20:10:57.087749   66875 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:10:57.087794   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:57.106878   66875 system_svc.go:56] duration metric: took 19.118595ms WaitForService to wait for kubelet
	I0429 20:10:57.106917   66875 kubeadm.go:576] duration metric: took 4m22.695498557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:10:57.106945   66875 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:10:57.111052   66875 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:10:57.111082   66875 node_conditions.go:123] node cpu capacity is 2
	I0429 20:10:57.111096   66875 node_conditions.go:105] duration metric: took 4.144283ms to run NodePressure ...
	I0429 20:10:57.111112   66875 start.go:240] waiting for startup goroutines ...
	I0429 20:10:57.111122   66875 start.go:245] waiting for cluster config update ...
	I0429 20:10:57.111141   66875 start.go:254] writing updated cluster config ...
	I0429 20:10:57.111536   66875 ssh_runner.go:195] Run: rm -f paused
	I0429 20:10:57.169536   66875 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:10:57.172347   66875 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-866143" cluster and "default" namespace by default
	I0429 20:10:55.358683   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:10:55.371397   66218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:10:55.397119   66218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:10:55.397192   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:55.397192   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-456788 minikube.k8s.io/updated_at=2024_04_29T20_10_55_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=no-preload-456788 minikube.k8s.io/primary=true
	I0429 20:10:55.605222   66218 ops.go:34] apiserver oom_adj: -16
	I0429 20:10:55.605588   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:56.106450   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:56.605894   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:57.105657   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:57.605823   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:54.258101   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:56.258336   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:58.106263   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:58.605675   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:59.106483   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:59.605671   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:00.105670   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:00.605695   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:01.106482   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:01.606206   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:02.106534   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:02.606372   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:58.756416   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:00.756875   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:02.756955   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:03.106555   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:03.606298   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.106227   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.606531   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:05.105708   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:05.605735   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:06.106556   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:06.606380   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:07.105690   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:07.605718   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.749964   65980 pod_ready.go:81] duration metric: took 4m0.000195525s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" ...
	E0429 20:11:04.749999   65980 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 20:11:04.750024   65980 pod_ready.go:38] duration metric: took 4m6.211964949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:04.750053   65980 kubeadm.go:591] duration metric: took 4m17.268163648s to restartPrimaryControlPlane
	W0429 20:11:04.750123   65980 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:11:04.750156   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:11:08.106383   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:08.606498   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:08.726533   66218 kubeadm.go:1107] duration metric: took 13.329402445s to wait for elevateKubeSystemPrivileges
	W0429 20:11:08.726584   66218 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:11:08.726596   66218 kubeadm.go:393] duration metric: took 5m14.838913251s to StartCluster
	I0429 20:11:08.726617   66218 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:08.726706   66218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:11:08.729364   66218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:08.730202   66218 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:11:08.731600   66218 out.go:177] * Verifying Kubernetes components...
	I0429 20:11:08.730245   66218 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:11:08.730446   66218 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:11:08.733479   66218 addons.go:69] Setting storage-provisioner=true in profile "no-preload-456788"
	I0429 20:11:08.733509   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:11:08.733518   66218 addons.go:69] Setting default-storageclass=true in profile "no-preload-456788"
	I0429 20:11:08.733540   66218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-456788"
	I0429 20:11:08.733514   66218 addons.go:234] Setting addon storage-provisioner=true in "no-preload-456788"
	W0429 20:11:08.733641   66218 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:11:08.733674   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.733963   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.733988   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.734081   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.734079   66218 addons.go:69] Setting metrics-server=true in profile "no-preload-456788"
	I0429 20:11:08.734106   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.734117   66218 addons.go:234] Setting addon metrics-server=true in "no-preload-456788"
	W0429 20:11:08.734126   66218 addons.go:243] addon metrics-server should already be in state true
	I0429 20:11:08.734154   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.734503   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.734536   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.754451   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I0429 20:11:08.754650   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0429 20:11:08.754827   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
	I0429 20:11:08.755114   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755237   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755332   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755884   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.755905   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756031   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.756048   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.756050   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756062   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756456   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756477   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756513   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756853   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.757231   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.757254   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.757256   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.757291   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.761534   66218 addons.go:234] Setting addon default-storageclass=true in "no-preload-456788"
	W0429 20:11:08.761551   66218 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:11:08.761574   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.761857   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.761894   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.776659   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0429 20:11:08.776838   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0429 20:11:08.777067   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.777462   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.777643   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.777657   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.778152   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.778162   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.778170   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.778371   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.778845   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.778901   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0429 20:11:08.779220   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.779415   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.779446   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.779621   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.779634   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.780051   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.780246   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.780506   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.782432   66218 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:11:08.783809   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:11:08.783825   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:11:08.783843   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.782370   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.786004   66218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:11:08.787488   66218 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:08.787506   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:11:08.787663   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.788245   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.788290   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.788308   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.788381   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.788632   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.788834   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.788985   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:08.791587   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.791964   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.792052   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.792293   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.792477   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.792614   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.792712   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:08.798944   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I0429 20:11:08.799562   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.800224   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.800243   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.800790   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.801008   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.803220   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.803519   66218 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:08.803534   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:11:08.803552   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.806797   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.807216   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.807244   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.807540   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.807986   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.808170   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.808313   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:09.006753   66218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:11:09.038156   66218 node_ready.go:35] waiting up to 6m0s for node "no-preload-456788" to be "Ready" ...
	I0429 20:11:09.051516   66218 node_ready.go:49] node "no-preload-456788" has status "Ready":"True"
	I0429 20:11:09.051545   66218 node_ready.go:38] duration metric: took 13.34705ms for node "no-preload-456788" to be "Ready" ...
	I0429 20:11:09.051557   66218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:09.064032   66218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:09.308339   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:09.308749   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:11:09.308773   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:11:09.309961   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:09.347829   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:11:09.347860   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:11:09.466683   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:11:09.466718   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:11:09.678800   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:11:09.718867   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.718899   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.719248   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.719276   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.719273   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:09.719288   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.719296   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.719553   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.719574   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.719581   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:09.726177   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.726204   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.726527   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.726544   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.726590   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:10.570942   66218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.260944092s)
	I0429 20:11:10.571001   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.571012   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.571480   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.571504   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.571520   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.571528   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.571792   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:10.571818   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.571833   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.912211   66218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.233359134s)
	I0429 20:11:10.912282   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.912298   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.912746   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.912769   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.912779   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.912787   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.913055   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.913108   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.913132   66218 addons.go:470] Verifying addon metrics-server=true in "no-preload-456788"
	I0429 20:11:10.916694   66218 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0429 20:11:10.918273   66218 addons.go:505] duration metric: took 2.188028967s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0429 20:11:11.108067   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.108091   66218 pod_ready.go:81] duration metric: took 2.044032617s for pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.108103   66218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.115163   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.115196   66218 pod_ready.go:81] duration metric: took 7.084503ms for pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.115210   66218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.129264   66218 pod_ready.go:92] pod "etcd-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.129286   66218 pod_ready.go:81] duration metric: took 14.068541ms for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.129297   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.148114   66218 pod_ready.go:92] pod "kube-apiserver-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.148142   66218 pod_ready.go:81] duration metric: took 18.837962ms for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.148155   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.157985   66218 pod_ready.go:92] pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.158006   66218 pod_ready.go:81] duration metric: took 9.844321ms for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.158016   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6m95d" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.469680   66218 pod_ready.go:92] pod "kube-proxy-6m95d" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.469701   66218 pod_ready.go:81] duration metric: took 311.678646ms for pod "kube-proxy-6m95d" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.469710   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.868513   66218 pod_ready.go:92] pod "kube-scheduler-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.868539   66218 pod_ready.go:81] duration metric: took 398.821528ms for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.868550   66218 pod_ready.go:38] duration metric: took 2.816983409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:11.868569   66218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:11:11.868632   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:11:11.885115   66218 api_server.go:72] duration metric: took 3.154873937s to wait for apiserver process to appear ...
	I0429 20:11:11.885146   66218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:11:11.885169   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:11:11.890715   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I0429 20:11:11.891649   66218 api_server.go:141] control plane version: v1.30.0
	I0429 20:11:11.891671   66218 api_server.go:131] duration metric: took 6.518818ms to wait for apiserver health ...
	I0429 20:11:11.891679   66218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:11:12.072142   66218 system_pods.go:59] 9 kube-system pods found
	I0429 20:11:12.072175   66218 system_pods.go:61] "coredns-7db6d8ff4d-hcfbq" [c0b53824-478e-4523-ada4-1cd7ba306c81] Running
	I0429 20:11:12.072183   66218 system_pods.go:61] "coredns-7db6d8ff4d-pvhwv" [f38ee7b3-53fe-4609-9b2b-000f55de5d5c] Running
	I0429 20:11:12.072188   66218 system_pods.go:61] "etcd-no-preload-456788" [b0629d4c-643a-485d-aa85-33fe009fff50] Running
	I0429 20:11:12.072194   66218 system_pods.go:61] "kube-apiserver-no-preload-456788" [e56edf5c-9883-4cd9-abab-09902048f584] Running
	I0429 20:11:12.072200   66218 system_pods.go:61] "kube-controller-manager-no-preload-456788" [bfaf44f0-da19-4cec-bec9-d9917cb8a571] Running
	I0429 20:11:12.072205   66218 system_pods.go:61] "kube-proxy-6m95d" [25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7] Running
	I0429 20:11:12.072209   66218 system_pods.go:61] "kube-scheduler-no-preload-456788" [de4f90f7-05d6-4755-a4c0-2c522f7fe88c] Running
	I0429 20:11:12.072217   66218 system_pods.go:61] "metrics-server-569cc877fc-sxgwr" [046d28fe-d51e-43ba-9550-d1d7e33d9d84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:11:12.072224   66218 system_pods.go:61] "storage-provisioner" [fd1c4813-8889-4f21-b21e-6007eaa163a6] Running
	I0429 20:11:12.072247   66218 system_pods.go:74] duration metric: took 180.561509ms to wait for pod list to return data ...
	I0429 20:11:12.072256   66218 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:11:12.267637   66218 default_sa.go:45] found service account: "default"
	I0429 20:11:12.267663   66218 default_sa.go:55] duration metric: took 195.398841ms for default service account to be created ...
	I0429 20:11:12.267677   66218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:11:12.471933   66218 system_pods.go:86] 9 kube-system pods found
	I0429 20:11:12.471967   66218 system_pods.go:89] "coredns-7db6d8ff4d-hcfbq" [c0b53824-478e-4523-ada4-1cd7ba306c81] Running
	I0429 20:11:12.471975   66218 system_pods.go:89] "coredns-7db6d8ff4d-pvhwv" [f38ee7b3-53fe-4609-9b2b-000f55de5d5c] Running
	I0429 20:11:12.471981   66218 system_pods.go:89] "etcd-no-preload-456788" [b0629d4c-643a-485d-aa85-33fe009fff50] Running
	I0429 20:11:12.471987   66218 system_pods.go:89] "kube-apiserver-no-preload-456788" [e56edf5c-9883-4cd9-abab-09902048f584] Running
	I0429 20:11:12.471994   66218 system_pods.go:89] "kube-controller-manager-no-preload-456788" [bfaf44f0-da19-4cec-bec9-d9917cb8a571] Running
	I0429 20:11:12.471999   66218 system_pods.go:89] "kube-proxy-6m95d" [25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7] Running
	I0429 20:11:12.472008   66218 system_pods.go:89] "kube-scheduler-no-preload-456788" [de4f90f7-05d6-4755-a4c0-2c522f7fe88c] Running
	I0429 20:11:12.472020   66218 system_pods.go:89] "metrics-server-569cc877fc-sxgwr" [046d28fe-d51e-43ba-9550-d1d7e33d9d84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:11:12.472027   66218 system_pods.go:89] "storage-provisioner" [fd1c4813-8889-4f21-b21e-6007eaa163a6] Running
	I0429 20:11:12.472039   66218 system_pods.go:126] duration metric: took 204.355515ms to wait for k8s-apps to be running ...
	I0429 20:11:12.472052   66218 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:11:12.472110   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:11:12.487748   66218 system_svc.go:56] duration metric: took 15.68796ms WaitForService to wait for kubelet
	I0429 20:11:12.487779   66218 kubeadm.go:576] duration metric: took 3.757538662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:11:12.487804   66218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:11:12.668597   66218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:11:12.668619   66218 node_conditions.go:123] node cpu capacity is 2
	I0429 20:11:12.668629   66218 node_conditions.go:105] duration metric: took 180.819727ms to run NodePressure ...
	I0429 20:11:12.668640   66218 start.go:240] waiting for startup goroutines ...
	I0429 20:11:12.668646   66218 start.go:245] waiting for cluster config update ...
	I0429 20:11:12.668656   66218 start.go:254] writing updated cluster config ...
	I0429 20:11:12.668905   66218 ssh_runner.go:195] Run: rm -f paused
	I0429 20:11:12.718997   66218 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:11:12.720757   66218 out.go:177] * Done! kubectl is now configured to use "no-preload-456788" cluster and "default" namespace by default
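[Editor's illustrative sketch, not part of the captured log: the sequence above ends with api_server.go probing https://192.168.39.235:8443/healthz and reporting "returned 200: ok". A minimal standalone Go sketch of that kind of healthz probe, assuming the same endpoint from the log and skipping TLS verification purely for brevity, could look like the following.]

	// healthz_probe.go - hypothetical example, not minikube source code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Short timeout and InsecureSkipVerify keep the sketch self-contained;
		// a real check would trust the cluster CA instead.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.235:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// Expect "200" and a body of "ok" on a healthy apiserver, as in the log above.
		fmt.Printf("status=%d body=%s\n", resp.StatusCode, string(body))
	}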
	I0429 20:11:37.819019   65980 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.068841912s)
	I0429 20:11:37.819092   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:11:37.836850   65980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:11:37.849684   65980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:11:37.861597   65980 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:11:37.861626   65980 kubeadm.go:156] found existing configuration files:
	
	I0429 20:11:37.861674   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:11:37.872799   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:11:37.872860   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:11:37.884336   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:11:37.895124   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:11:37.895181   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:11:37.906874   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:11:37.917482   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:11:37.917530   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:11:37.928137   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:11:37.938698   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:11:37.938750   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:11:37.949658   65980 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:11:38.159358   65980 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:11:46.848042   65980 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:11:46.848108   65980 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:11:46.848169   65980 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:11:46.848308   65980 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:11:46.848447   65980 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:11:46.848531   65980 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:11:46.850368   65980 out.go:204]   - Generating certificates and keys ...
	I0429 20:11:46.850444   65980 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:11:46.850496   65980 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:11:46.850580   65980 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:11:46.850649   65980 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:11:46.850742   65980 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:11:46.850850   65980 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:11:46.850949   65980 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:11:46.851018   65980 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:11:46.851117   65980 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:11:46.851201   65980 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:11:46.851263   65980 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:11:46.851327   65980 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:11:46.851395   65980 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:11:46.851466   65980 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:11:46.851513   65980 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:11:46.851605   65980 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:11:46.851690   65980 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:11:46.851791   65980 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:11:46.851878   65980 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:11:46.853420   65980 out.go:204]   - Booting up control plane ...
	I0429 20:11:46.853526   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:11:46.853617   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:11:46.853696   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:11:46.853791   65980 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:11:46.853866   65980 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:11:46.853900   65980 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:11:46.854010   65980 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:11:46.854094   65980 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:11:46.854148   65980 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.976221ms
	I0429 20:11:46.854240   65980 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:11:46.854311   65980 kubeadm.go:309] [api-check] The API server is healthy after 5.50298765s
	I0429 20:11:46.854407   65980 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:11:46.854509   65980 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:11:46.854565   65980 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:11:46.854726   65980 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-161370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:11:46.854783   65980 kubeadm.go:309] [bootstrap-token] Using token: 93xwhj.zowa67wvl54p1iru
	I0429 20:11:46.856308   65980 out.go:204]   - Configuring RBAC rules ...
	I0429 20:11:46.856452   65980 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:11:46.856561   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:11:46.856736   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:11:46.856867   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:11:46.857018   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:11:46.857140   65980 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:11:46.857294   65980 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:11:46.857358   65980 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:11:46.857419   65980 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:11:46.857428   65980 kubeadm.go:309] 
	I0429 20:11:46.857502   65980 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:11:46.857514   65980 kubeadm.go:309] 
	I0429 20:11:46.857606   65980 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:11:46.857617   65980 kubeadm.go:309] 
	I0429 20:11:46.857649   65980 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:11:46.857725   65980 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:11:46.857797   65980 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:11:46.857806   65980 kubeadm.go:309] 
	I0429 20:11:46.857880   65980 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:11:46.857889   65980 kubeadm.go:309] 
	I0429 20:11:46.857947   65980 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:11:46.857955   65980 kubeadm.go:309] 
	I0429 20:11:46.858020   65980 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:11:46.858125   65980 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:11:46.858216   65980 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:11:46.858224   65980 kubeadm.go:309] 
	I0429 20:11:46.858325   65980 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:11:46.858433   65980 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:11:46.858442   65980 kubeadm.go:309] 
	I0429 20:11:46.858553   65980 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 93xwhj.zowa67wvl54p1iru \
	I0429 20:11:46.858696   65980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 20:11:46.858722   65980 kubeadm.go:309] 	--control-plane 
	I0429 20:11:46.858728   65980 kubeadm.go:309] 
	I0429 20:11:46.858797   65980 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:11:46.858803   65980 kubeadm.go:309] 
	I0429 20:11:46.858881   65980 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 93xwhj.zowa67wvl54p1iru \
	I0429 20:11:46.859014   65980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
	I0429 20:11:46.859025   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:11:46.859034   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:11:46.861619   65980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:11:46.863111   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:11:46.875965   65980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:11:46.897147   65980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:11:46.897225   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:46.897238   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-161370 minikube.k8s.io/updated_at=2024_04_29T20_11_46_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=embed-certs-161370 minikube.k8s.io/primary=true
	I0429 20:11:46.927555   65980 ops.go:34] apiserver oom_adj: -16
	I0429 20:11:47.119594   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:47.620640   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:48.119974   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:48.620618   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:49.120107   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:49.620349   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:50.120180   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:50.620533   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:51.120332   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:51.620669   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:52.119922   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:52.620467   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:53.120486   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:53.620314   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:54.120159   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:54.620430   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:55.119995   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:55.620496   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:56.120152   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:56.620390   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:57.120090   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:57.619671   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:58.120549   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:58.620334   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.120532   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.619732   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.765502   65980 kubeadm.go:1107] duration metric: took 12.868344365s to wait for elevateKubeSystemPrivileges
	W0429 20:11:59.765550   65980 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:11:59.765561   65980 kubeadm.go:393] duration metric: took 5m12.339650014s to StartCluster
	I0429 20:11:59.765582   65980 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:59.765671   65980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:11:59.767924   65980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:59.768253   65980 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:11:59.769950   65980 out.go:177] * Verifying Kubernetes components...
	I0429 20:11:59.768323   65980 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:11:59.768433   65980 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:11:59.771281   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:11:59.771300   65980 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-161370"
	I0429 20:11:59.771313   65980 addons.go:69] Setting default-storageclass=true in profile "embed-certs-161370"
	I0429 20:11:59.771332   65980 addons.go:69] Setting metrics-server=true in profile "embed-certs-161370"
	I0429 20:11:59.771344   65980 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-161370"
	W0429 20:11:59.771355   65980 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:11:59.771361   65980 addons.go:234] Setting addon metrics-server=true in "embed-certs-161370"
	W0429 20:11:59.771370   65980 addons.go:243] addon metrics-server should already be in state true
	I0429 20:11:59.771399   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.771401   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.771354   65980 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-161370"
	I0429 20:11:59.771757   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771768   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771772   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771783   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.771786   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.771788   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.787359   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0429 20:11:59.787384   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0429 20:11:59.787503   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46153
	I0429 20:11:59.787764   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.787987   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.788069   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.788254   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788273   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.788708   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788724   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.788773   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.788832   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788852   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.789102   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.789117   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.789267   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.789478   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.789510   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.790170   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.790220   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.792108   65980 addons.go:234] Setting addon default-storageclass=true in "embed-certs-161370"
	W0429 20:11:59.792127   65980 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:11:59.792154   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.792386   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.792424   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.808581   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35659
	I0429 20:11:59.808924   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44943
	I0429 20:11:59.808943   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.809461   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.809481   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.809561   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.809791   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.810335   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.810357   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.810976   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.810992   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.811324   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.811604   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0429 20:11:59.811758   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.812141   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.812592   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.812610   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.813130   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.813351   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.813614   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.815589   65980 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:11:59.817004   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:11:59.817014   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:11:59.817027   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.815020   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.818585   65980 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:11:59.820110   65980 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:59.820125   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:11:59.820140   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.819840   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.820305   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.820333   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.820563   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.820722   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.820874   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.820998   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.822849   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.823299   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.823323   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.823460   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.823599   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.823924   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.824039   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.827552   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0429 20:11:59.827976   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.828369   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.828389   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.828754   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.828921   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.830295   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.830566   65980 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:59.830578   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:11:59.830590   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.833174   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.833526   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.833545   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.833759   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.833910   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.834029   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.834166   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.978978   65980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:11:59.995547   65980 node_ready.go:35] waiting up to 6m0s for node "embed-certs-161370" to be "Ready" ...
	I0429 20:12:00.003802   65980 node_ready.go:49] node "embed-certs-161370" has status "Ready":"True"
	I0429 20:12:00.003823   65980 node_ready.go:38] duration metric: took 8.245639ms for node "embed-certs-161370" to be "Ready" ...
	I0429 20:12:00.003833   65980 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:12:00.010487   65980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:00.072627   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:12:00.075716   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:12:00.177043   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:12:00.177069   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:12:00.278082   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:12:00.278112   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:12:00.311731   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:12:00.311756   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:12:00.369982   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:12:00.642840   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.642865   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643084   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.643109   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643227   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.643240   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.643248   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.643256   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643374   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.645085   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645103   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.645112   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.645121   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.645196   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645228   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.645231   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.645331   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645343   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.658929   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.658955   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.659236   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.659267   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.659281   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.103183   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:01.103207   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:01.103488   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:01.103542   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.103557   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:01.103541   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:01.103584   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:01.105440   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:01.105461   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.105473   65980 addons.go:470] Verifying addon metrics-server=true in "embed-certs-161370"
	I0429 20:12:01.107435   65980 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0429 20:12:01.109051   65980 addons.go:505] duration metric: took 1.340729876s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0429 20:12:02.029772   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace has status "Ready":"False"
	I0429 20:12:02.520396   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.520417   65980 pod_ready.go:81] duration metric: took 2.509903724s for pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.520426   65980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.529115   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.529141   65980 pod_ready.go:81] duration metric: took 8.707165ms for pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.529153   65980 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.539459   65980 pod_ready.go:92] pod "etcd-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.539478   65980 pod_ready.go:81] duration metric: took 10.318294ms for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.539489   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.543813   65980 pod_ready.go:92] pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.543830   65980 pod_ready.go:81] duration metric: took 4.333619ms for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.543839   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.549343   65980 pod_ready.go:92] pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.549363   65980 pod_ready.go:81] duration metric: took 5.516323ms for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.549374   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wq48j" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.915209   65980 pod_ready.go:92] pod "kube-proxy-wq48j" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.915232   65980 pod_ready.go:81] duration metric: took 365.851814ms for pod "kube-proxy-wq48j" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.915240   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:03.315564   65980 pod_ready.go:92] pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:03.315587   65980 pod_ready.go:81] duration metric: took 400.340876ms for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:03.315595   65980 pod_ready.go:38] duration metric: took 3.311752591s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:12:03.315609   65980 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:12:03.315655   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:12:03.333491   65980 api_server.go:72] duration metric: took 3.565200855s to wait for apiserver process to appear ...
	I0429 20:12:03.333521   65980 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:12:03.333538   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:12:03.338822   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0429 20:12:03.339975   65980 api_server.go:141] control plane version: v1.30.0
	I0429 20:12:03.339995   65980 api_server.go:131] duration metric: took 6.468233ms to wait for apiserver health ...
	I0429 20:12:03.340002   65980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:12:03.519016   65980 system_pods.go:59] 9 kube-system pods found
	I0429 20:12:03.519042   65980 system_pods.go:61] "coredns-7db6d8ff4d-7z6zv" [422451a2-615d-4bf8-8de8-d5fa5805219f] Running
	I0429 20:12:03.519047   65980 system_pods.go:61] "coredns-7db6d8ff4d-rr6bd" [6d14ff20-6dab-4c02-b91c-0a1e326f1593] Running
	I0429 20:12:03.519050   65980 system_pods.go:61] "etcd-embed-certs-161370" [ab19e79c-18bd-4d0d-b5cf-639453495383] Running
	I0429 20:12:03.519055   65980 system_pods.go:61] "kube-apiserver-embed-certs-161370" [6091dd0a-333d-4729-97db-eb7a30755db4] Running
	I0429 20:12:03.519059   65980 system_pods.go:61] "kube-controller-manager-embed-certs-161370" [de70d57c-9329-4d37-a838-9c9ae1e41871] Running
	I0429 20:12:03.519061   65980 system_pods.go:61] "kube-proxy-wq48j" [3b3b23ef-b5b4-4754-bc44-73e1d51a18d7] Running
	I0429 20:12:03.519065   65980 system_pods.go:61] "kube-scheduler-embed-certs-161370" [c7fd3d36-4e35-43b2-93e7-45129464937d] Running
	I0429 20:12:03.519071   65980 system_pods.go:61] "metrics-server-569cc877fc-x2wb6" [cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:12:03.519075   65980 system_pods.go:61] "storage-provisioner" [93e046a1-3867-44e1-8a4f-cf0eba6dfd6b] Running
	I0429 20:12:03.519082   65980 system_pods.go:74] duration metric: took 179.075384ms to wait for pod list to return data ...
	I0429 20:12:03.519089   65980 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:12:03.714354   65980 default_sa.go:45] found service account: "default"
	I0429 20:12:03.714384   65980 default_sa.go:55] duration metric: took 195.287433ms for default service account to be created ...
	I0429 20:12:03.714395   65980 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:12:03.918729   65980 system_pods.go:86] 9 kube-system pods found
	I0429 20:12:03.918755   65980 system_pods.go:89] "coredns-7db6d8ff4d-7z6zv" [422451a2-615d-4bf8-8de8-d5fa5805219f] Running
	I0429 20:12:03.918760   65980 system_pods.go:89] "coredns-7db6d8ff4d-rr6bd" [6d14ff20-6dab-4c02-b91c-0a1e326f1593] Running
	I0429 20:12:03.918765   65980 system_pods.go:89] "etcd-embed-certs-161370" [ab19e79c-18bd-4d0d-b5cf-639453495383] Running
	I0429 20:12:03.918769   65980 system_pods.go:89] "kube-apiserver-embed-certs-161370" [6091dd0a-333d-4729-97db-eb7a30755db4] Running
	I0429 20:12:03.918773   65980 system_pods.go:89] "kube-controller-manager-embed-certs-161370" [de70d57c-9329-4d37-a838-9c9ae1e41871] Running
	I0429 20:12:03.918777   65980 system_pods.go:89] "kube-proxy-wq48j" [3b3b23ef-b5b4-4754-bc44-73e1d51a18d7] Running
	I0429 20:12:03.918780   65980 system_pods.go:89] "kube-scheduler-embed-certs-161370" [c7fd3d36-4e35-43b2-93e7-45129464937d] Running
	I0429 20:12:03.918787   65980 system_pods.go:89] "metrics-server-569cc877fc-x2wb6" [cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:12:03.918791   65980 system_pods.go:89] "storage-provisioner" [93e046a1-3867-44e1-8a4f-cf0eba6dfd6b] Running
	I0429 20:12:03.918800   65980 system_pods.go:126] duration metric: took 204.399385ms to wait for k8s-apps to be running ...
	I0429 20:12:03.918809   65980 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:12:03.918851   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:12:03.937870   65980 system_svc.go:56] duration metric: took 19.05503ms WaitForService to wait for kubelet
	I0429 20:12:03.937892   65980 kubeadm.go:576] duration metric: took 4.169607456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:12:03.937910   65980 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:12:04.116479   65980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:12:04.116504   65980 node_conditions.go:123] node cpu capacity is 2
	I0429 20:12:04.116513   65980 node_conditions.go:105] duration metric: took 178.599246ms to run NodePressure ...
	I0429 20:12:04.116524   65980 start.go:240] waiting for startup goroutines ...
	I0429 20:12:04.116530   65980 start.go:245] waiting for cluster config update ...
	I0429 20:12:04.116540   65980 start.go:254] writing updated cluster config ...
	I0429 20:12:04.116799   65980 ssh_runner.go:195] Run: rm -f paused
	I0429 20:12:04.167803   65980 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:12:04.169861   65980 out.go:177] * Done! kubectl is now configured to use "embed-certs-161370" cluster and "default" namespace by default
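[Editor's illustrative sketch, not part of the captured log: the pod_ready.go lines above poll kube-system pods until their Ready condition is True. A minimal client-go sketch of that style of wait loop, assuming a kubeconfig at the default location and the pod name shown in this run, could look like the following.]

	// pod_ready_wait.go - hypothetical example, not minikube source code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config (the file minikube updates when a profile starts).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(cfg)

		// Poll until the pod reports Ready or the deadline passes,
		// mirroring the 6m0s wait used in the log above.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := clientset.CoreV1().Pods("kube-system").
				Get(context.TODO(), "etcd-embed-certs-161370", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}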
	I0429 20:12:09.853929   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:12:09.854036   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:12:09.856141   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:12:09.856215   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:12:09.856314   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:12:09.856435   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:12:09.856529   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:12:09.856638   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:12:09.858658   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:12:09.858759   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:12:09.858821   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:12:09.858914   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:12:09.858967   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:12:09.859049   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:12:09.859118   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:12:09.859197   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:12:09.859311   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:12:09.859435   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:12:09.859548   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:12:09.859605   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:12:09.859678   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:12:09.859766   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:12:09.859856   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:12:09.859947   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:12:09.860025   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:12:09.860149   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:12:09.860228   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:12:09.860289   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:12:09.860390   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:12:09.862098   66615 out.go:204]   - Booting up control plane ...
	I0429 20:12:09.862211   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:12:09.862298   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:12:09.862360   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:12:09.862484   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:12:09.862720   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:12:09.862794   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:12:09.862882   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863117   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863244   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863470   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863544   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863814   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863895   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864144   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864223   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864393   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864408   66615 kubeadm.go:309] 
	I0429 20:12:09.864473   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:12:09.864526   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:12:09.864543   66615 kubeadm.go:309] 
	I0429 20:12:09.864589   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:12:09.864638   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:12:09.864779   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:12:09.864789   66615 kubeadm.go:309] 
	I0429 20:12:09.864911   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:12:09.864971   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:12:09.865026   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:12:09.865033   66615 kubeadm.go:309] 
	I0429 20:12:09.865150   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:12:09.865228   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:12:09.865241   66615 kubeadm.go:309] 
	I0429 20:12:09.865404   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:12:09.865538   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:12:09.865651   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:12:09.865755   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:12:09.865828   66615 kubeadm.go:309] 
	W0429 20:12:09.865940   66615 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0429 20:12:09.866027   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:12:10.987703   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.121642991s)
	I0429 20:12:10.987802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:12:11.007295   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:12:11.020772   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:12:11.020790   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:12:11.020838   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:12:11.033334   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:12:11.033405   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:12:11.044565   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:12:11.057087   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:12:11.057143   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:12:11.069908   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.082866   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:12:11.082920   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.096659   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:12:11.110106   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:12:11.110166   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:12:11.124952   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:12:11.396252   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:14:07.831448   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:14:07.831556   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:14:07.833111   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:14:07.833179   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:14:07.833288   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:14:07.833421   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:14:07.833530   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:14:07.833616   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:14:07.835518   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:14:07.835623   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:14:07.835703   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:14:07.835776   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:14:07.835839   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:14:07.835893   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:14:07.835957   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:14:07.836039   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:14:07.836129   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:14:07.836238   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:14:07.836350   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:14:07.836394   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:14:07.836441   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:14:07.836488   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:14:07.836559   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:14:07.836637   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:14:07.836683   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:14:07.836778   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:14:07.836854   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:14:07.836895   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:14:07.836950   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:14:07.838553   66615 out.go:204]   - Booting up control plane ...
	I0429 20:14:07.838635   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:14:07.838718   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:14:07.838836   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:14:07.838918   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:14:07.839069   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:14:07.839126   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:14:07.839180   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839369   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839450   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839654   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839779   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840008   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840076   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840322   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840380   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840571   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840594   66615 kubeadm.go:309] 
	I0429 20:14:07.840637   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:14:07.840673   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:14:07.840682   66615 kubeadm.go:309] 
	I0429 20:14:07.840715   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:14:07.840745   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:14:07.840844   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:14:07.840857   66615 kubeadm.go:309] 
	I0429 20:14:07.840969   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:14:07.841022   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:14:07.841073   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:14:07.841083   66615 kubeadm.go:309] 
	I0429 20:14:07.841184   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:14:07.841315   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:14:07.841325   66615 kubeadm.go:309] 
	I0429 20:14:07.841454   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:14:07.841550   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:14:07.841632   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:14:07.841697   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:14:07.841760   66615 kubeadm.go:393] duration metric: took 8m1.501853767s to StartCluster
	I0429 20:14:07.841781   66615 kubeadm.go:309] 
	I0429 20:14:07.841800   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:14:07.841853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:14:07.898194   66615 cri.go:89] found id: ""
	I0429 20:14:07.898227   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.898237   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:14:07.898244   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:14:07.898316   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:14:07.938873   66615 cri.go:89] found id: ""
	I0429 20:14:07.938903   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.938914   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:14:07.938921   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:14:07.938979   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:14:07.980523   66615 cri.go:89] found id: ""
	I0429 20:14:07.980551   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.980559   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:14:07.980565   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:14:07.980612   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:14:08.021334   66615 cri.go:89] found id: ""
	I0429 20:14:08.021366   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.021377   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:14:08.021389   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:14:08.021446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:14:08.060598   66615 cri.go:89] found id: ""
	I0429 20:14:08.060636   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.060648   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:14:08.060655   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:14:08.060716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:14:08.101689   66615 cri.go:89] found id: ""
	I0429 20:14:08.101715   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.101723   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:14:08.101729   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:14:08.101786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:14:08.143295   66615 cri.go:89] found id: ""
	I0429 20:14:08.143333   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.143344   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:14:08.143351   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:14:08.143408   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:14:08.190555   66615 cri.go:89] found id: ""
	I0429 20:14:08.190585   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.190597   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:14:08.190609   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:14:08.190624   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:14:08.251830   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:14:08.251870   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:14:08.306512   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:14:08.306554   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:14:08.323258   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:14:08.323283   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:14:08.405539   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:14:08.405568   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:14:08.405583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0429 20:14:08.514288   66615 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0429 20:14:08.514344   66615 out.go:239] * 
	W0429 20:14:08.514431   66615 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.514465   66615 out.go:239] * 
	W0429 20:14:08.515399   66615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:14:08.518578   66615 out.go:177] 
	W0429 20:14:08.519725   66615 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.519782   66615 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0429 20:14:08.519816   66615 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0429 20:14:08.521068   66615 out.go:177] 
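	Following the suggestion printed above, a retry of this start with the kubelet cgroup driver pinned to systemd might look like the sketch below. The profile name, Kubernetes version and container runtime are taken from this log; the rest of the original invocation is not shown here, so the remaining flags are assumptions:

		minikube start -p old-k8s-version-919612 \
		  --kubernetes-version=v1.20.0 --container-runtime=crio \
		  --extra-config=kubelet.cgroup-driver=systemd
		# If the kubelet still fails to come up, inspect it directly on the node:
		minikube ssh -p old-k8s-version-919612 "sudo journalctl -xeu kubelet | tail -n 100"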
	
	
	==> CRI-O <==
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.336620712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714421650336596119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c456c37b-224c-4ea8-8d55-f2a6ea71dc4a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.337266900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7856d78b-6674-47d0-912b-5fe84a85ab54 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.337314078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7856d78b-6674-47d0-912b-5fe84a85ab54 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.337346480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7856d78b-6674-47d0-912b-5fe84a85ab54 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.371860751Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4df55793-c17e-4136-8451-814d07d68112 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.372026847Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4df55793-c17e-4136-8451-814d07d68112 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.373783469Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1df1d03e-a3d8-4252-b991-35b705a0bece name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.374292937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714421650374264385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1df1d03e-a3d8-4252-b991-35b705a0bece name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.375119996Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16c8b311-4924-42af-9afd-a38f2e2c3119 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.375190799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16c8b311-4924-42af-9afd-a38f2e2c3119 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.375231197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=16c8b311-4924-42af-9afd-a38f2e2c3119 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.411538612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e9b9fc59-97a5-4224-9c57-40281846355e name=/runtime.v1.RuntimeService/Version
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.411647237Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9b9fc59-97a5-4224-9c57-40281846355e name=/runtime.v1.RuntimeService/Version
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.412851186Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ef4db4b-16b7-459a-8dc4-1db584a5306a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.413300382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714421650413279276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ef4db4b-16b7-459a-8dc4-1db584a5306a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.413814228Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99ea7600-63a0-4d11-968d-6e314824c883 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.413865401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99ea7600-63a0-4d11-968d-6e314824c883 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.413896560Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=99ea7600-63a0-4d11-968d-6e314824c883 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.453152389Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=372a367b-8d06-4525-a71a-06a53661f435 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.453250403Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=372a367b-8d06-4525-a71a-06a53661f435 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.454450554Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=420b0fd4-947f-4f18-a1b4-3f99ff6ebeb2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.454922246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714421650454891061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=420b0fd4-947f-4f18-a1b4-3f99ff6ebeb2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.455743029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21a1ec36-8ae3-482a-92be-c196a5ab643e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.455791038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21a1ec36-8ae3-482a-92be-c196a5ab643e name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:14:10 old-k8s-version-919612 crio[646]: time="2024-04-29 20:14:10.455829348Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=21a1ec36-8ae3-482a-92be-c196a5ab643e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr29 20:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052789] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046548] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.710890] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.577556] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.715602] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.063950] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.064197] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076631] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.231967] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.183078] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.301851] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[Apr29 20:06] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.070853] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.488329] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[ +10.271232] kauditd_printk_skb: 46 callbacks suppressed
	[Apr29 20:10] systemd-fstab-generator[4978]: Ignoring "noauto" option for root device
	[Apr29 20:12] systemd-fstab-generator[5259]: Ignoring "noauto" option for root device
	[  +0.075523] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:14:10 up 8 min,  0 users,  load average: 0.00, 0.09, 0.07
	Linux old-k8s-version-919612 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000a19e60)
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]: goroutine 163 [select]:
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c39ef0, 0x4f0ac20, 0xc000ad3b30, 0x1, 0xc0001000c0)
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000b5a2a0, 0xc0001000c0)
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b06bc0, 0xc000a57b20)
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 29 20:14:08 old-k8s-version-919612 kubelet[5436]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 29 20:14:08 old-k8s-version-919612 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 29 20:14:08 old-k8s-version-919612 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 29 20:14:09 old-k8s-version-919612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 29 20:14:09 old-k8s-version-919612 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 29 20:14:09 old-k8s-version-919612 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 29 20:14:09 old-k8s-version-919612 kubelet[5510]: I0429 20:14:09.430315    5510 server.go:416] Version: v1.20.0
	Apr 29 20:14:09 old-k8s-version-919612 kubelet[5510]: I0429 20:14:09.430734    5510 server.go:837] Client rotation is on, will bootstrap in background
	Apr 29 20:14:09 old-k8s-version-919612 kubelet[5510]: I0429 20:14:09.434732    5510 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 29 20:14:09 old-k8s-version-919612 kubelet[5510]: W0429 20:14:09.436799    5510 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 29 20:14:09 old-k8s-version-919612 kubelet[5510]: I0429 20:14:09.438922    5510 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-919612 -n old-k8s-version-919612
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-919612 -n old-k8s-version-919612: exit status 2 (251.043005ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-919612" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (722.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866143 -n default-k8s-diff-port-866143
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866143 -n default-k8s-diff-port-866143: exit status 3 (3.167889262s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 20:02:36.234487   66765 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.106:22: connect: no route to host
	E0429 20:02:36.234508   66765 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.106:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-866143 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-866143 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154280447s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.106:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-866143 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866143 -n default-k8s-diff-port-866143
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866143 -n default-k8s-diff-port-866143: exit status 3 (3.061663758s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 20:02:45.450663   66829 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.106:22: connect: no route to host
	E0429 20:02:45.450687   66829 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.106:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-866143" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-866143 -n default-k8s-diff-port-866143
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-29 20:19:57.772406674 +0000 UTC m=+6047.419781805
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866143 -n default-k8s-diff-port-866143
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-866143 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-866143 logs -n 25: (2.229037131s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:55 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| ssh     | cert-options-437743 ssh                                | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-437743 -- sudo                         | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-437743                                 | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-161370            | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-456788             | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-193781 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | disable-driver-mounts-193781                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 20:00 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-866143  | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-161370                 | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-919612        | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-456788                  | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:01 UTC | 29 Apr 24 20:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-919612             | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-866143       | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:10 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:02:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:02:45.502823   66875 out.go:291] Setting OutFile to fd 1 ...
	I0429 20:02:45.503073   66875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:45.503084   66875 out.go:304] Setting ErrFile to fd 2...
	I0429 20:02:45.503089   66875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:45.503272   66875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 20:02:45.503808   66875 out.go:298] Setting JSON to false
	I0429 20:02:45.504681   66875 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6263,"bootTime":1714414702,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 20:02:45.504736   66875 start.go:139] virtualization: kvm guest
	I0429 20:02:45.507344   66875 out.go:177] * [default-k8s-diff-port-866143] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 20:02:45.508715   66875 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:02:45.508745   66875 notify.go:220] Checking for updates...
	I0429 20:02:45.510093   66875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:02:45.512200   66875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:02:45.513622   66875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:02:45.514915   66875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 20:02:45.516228   66875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:02:45.517923   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:02:45.518366   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:45.518446   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:45.533484   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46187
	I0429 20:02:45.533901   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:45.534427   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:02:45.534448   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:45.534822   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:45.535013   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:02:45.535292   66875 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:02:45.535595   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:45.535639   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:45.551065   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0429 20:02:45.551469   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:45.551906   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:02:45.551928   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:45.552239   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:45.552451   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:02:45.584714   66875 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 20:02:45.586089   66875 start.go:297] selected driver: kvm2
	I0429 20:02:45.586117   66875 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:45.586250   66875 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:02:45.587043   66875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:45.587136   66875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 20:02:45.601799   66875 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 20:02:45.602171   66875 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:02:45.602246   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:02:45.602265   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:02:45.602323   66875 start.go:340] cluster config:
	{Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:45.602444   66875 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:45.605081   66875 out.go:177] * Starting "default-k8s-diff-port-866143" primary control-plane node in "default-k8s-diff-port-866143" cluster
	I0429 20:02:42.794291   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:45.866333   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:45.606536   66875 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:02:45.606590   66875 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 20:02:45.606602   66875 cache.go:56] Caching tarball of preloaded images
	I0429 20:02:45.606687   66875 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 20:02:45.606704   66875 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 20:02:45.606799   66875 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/config.json ...
	I0429 20:02:45.606986   66875 start.go:360] acquireMachinesLock for default-k8s-diff-port-866143: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:02:51.946332   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:55.018269   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:01.098329   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:04.170389   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:10.250316   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:13.322292   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:19.402290   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:22.474356   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:28.554348   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:31.626416   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:37.706282   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:40.778321   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:46.858318   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:49.930321   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:56.010331   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:59.082336   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:05.162299   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:08.234328   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:14.314352   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:17.386337   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:23.466350   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:26.538284   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:32.618297   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:35.690319   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:41.770372   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:44.842280   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:50.922320   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:53.994336   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:00.074389   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:03.146353   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:09.226369   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:12.298407   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:15.302828   66218 start.go:364] duration metric: took 4m7.483402316s to acquireMachinesLock for "no-preload-456788"
	I0429 20:05:15.302889   66218 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:15.302896   66218 fix.go:54] fixHost starting: 
	I0429 20:05:15.303301   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:15.303337   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:15.319582   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I0429 20:05:15.320057   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:15.320597   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:05:15.320620   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:15.321017   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:15.321272   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:15.321472   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:05:15.323137   66218 fix.go:112] recreateIfNeeded on no-preload-456788: state=Stopped err=<nil>
	I0429 20:05:15.323171   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	W0429 20:05:15.323346   66218 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:15.325520   66218 out.go:177] * Restarting existing kvm2 VM for "no-preload-456788" ...
	I0429 20:05:15.327122   66218 main.go:141] libmachine: (no-preload-456788) Calling .Start
	I0429 20:05:15.327314   66218 main.go:141] libmachine: (no-preload-456788) Ensuring networks are active...
	I0429 20:05:15.328136   66218 main.go:141] libmachine: (no-preload-456788) Ensuring network default is active
	I0429 20:05:15.328437   66218 main.go:141] libmachine: (no-preload-456788) Ensuring network mk-no-preload-456788 is active
	I0429 20:05:15.328771   66218 main.go:141] libmachine: (no-preload-456788) Getting domain xml...
	I0429 20:05:15.329442   66218 main.go:141] libmachine: (no-preload-456788) Creating domain...
	I0429 20:05:16.534970   66218 main.go:141] libmachine: (no-preload-456788) Waiting to get IP...
	I0429 20:05:16.536019   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:16.536375   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:16.536444   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:16.536369   67416 retry.go:31] will retry after 240.743093ms: waiting for machine to come up
	I0429 20:05:16.779123   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:16.779623   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:16.779659   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:16.779558   67416 retry.go:31] will retry after 355.595109ms: waiting for machine to come up
	I0429 20:05:17.137145   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:17.137512   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:17.137542   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:17.137480   67416 retry.go:31] will retry after 347.905643ms: waiting for machine to come up
	I0429 20:05:17.487174   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:17.487566   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:17.487597   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:17.487543   67416 retry.go:31] will retry after 547.016094ms: waiting for machine to come up
	I0429 20:05:15.300221   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:15.300278   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:05:15.300613   65980 buildroot.go:166] provisioning hostname "embed-certs-161370"
	I0429 20:05:15.300652   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:05:15.300910   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:05:15.302677   65980 machine.go:97] duration metric: took 4m37.41104152s to provisionDockerMachine
	I0429 20:05:15.302722   65980 fix.go:56] duration metric: took 4m37.432092484s for fixHost
	I0429 20:05:15.302728   65980 start.go:83] releasing machines lock for "embed-certs-161370", held for 4m37.432113341s
	W0429 20:05:15.302753   65980 start.go:713] error starting host: provision: host is not running
	W0429 20:05:15.302871   65980 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0429 20:05:15.302882   65980 start.go:728] Will try again in 5 seconds ...
	I0429 20:05:18.036617   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:18.037042   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:18.037104   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:18.037025   67416 retry.go:31] will retry after 465.100134ms: waiting for machine to come up
	I0429 20:05:18.503846   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:18.504326   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:18.504352   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:18.504283   67416 retry.go:31] will retry after 672.007195ms: waiting for machine to come up
	I0429 20:05:19.178173   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:19.178570   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:19.178604   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:19.178516   67416 retry.go:31] will retry after 744.052058ms: waiting for machine to come up
	I0429 20:05:19.924561   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:19.925029   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:19.925060   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:19.925002   67416 retry.go:31] will retry after 1.06511003s: waiting for machine to come up
	I0429 20:05:20.991584   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:20.992015   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:20.992046   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:20.991980   67416 retry.go:31] will retry after 1.677065765s: waiting for machine to come up
	I0429 20:05:22.671760   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:22.672123   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:22.672149   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:22.672085   67416 retry.go:31] will retry after 1.979191189s: waiting for machine to come up
	I0429 20:05:20.303964   65980 start.go:360] acquireMachinesLock for embed-certs-161370: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:05:24.654246   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:24.654711   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:24.654735   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:24.654663   67416 retry.go:31] will retry after 1.839551716s: waiting for machine to come up
	I0429 20:05:26.496511   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:26.496982   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:26.497017   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:26.496939   67416 retry.go:31] will retry after 3.505979368s: waiting for machine to come up
	I0429 20:05:30.006590   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:30.006916   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:30.006951   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:30.006871   67416 retry.go:31] will retry after 3.811785899s: waiting for machine to come up
	I0429 20:05:35.155600   66615 start.go:364] duration metric: took 3m25.093405289s to acquireMachinesLock for "old-k8s-version-919612"
	I0429 20:05:35.155655   66615 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:35.155661   66615 fix.go:54] fixHost starting: 
	I0429 20:05:35.155999   66615 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:35.156034   66615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:35.173332   66615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0429 20:05:35.173754   66615 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:35.174261   66615 main.go:141] libmachine: Using API Version  1
	I0429 20:05:35.174294   66615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:35.174602   66615 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:35.174797   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:35.174987   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetState
	I0429 20:05:35.176453   66615 fix.go:112] recreateIfNeeded on old-k8s-version-919612: state=Stopped err=<nil>
	I0429 20:05:35.176478   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	W0429 20:05:35.176647   66615 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:35.178966   66615 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-919612" ...
	I0429 20:05:33.823293   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.823787   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has current primary IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.823806   66218 main.go:141] libmachine: (no-preload-456788) Found IP for machine: 192.168.39.235
	I0429 20:05:33.823830   66218 main.go:141] libmachine: (no-preload-456788) Reserving static IP address...
	I0429 20:05:33.824243   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "no-preload-456788", mac: "52:54:00:15:ae:18", ip: "192.168.39.235"} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.824279   66218 main.go:141] libmachine: (no-preload-456788) DBG | skip adding static IP to network mk-no-preload-456788 - found existing host DHCP lease matching {name: "no-preload-456788", mac: "52:54:00:15:ae:18", ip: "192.168.39.235"}
	I0429 20:05:33.824293   66218 main.go:141] libmachine: (no-preload-456788) Reserved static IP address: 192.168.39.235
	I0429 20:05:33.824308   66218 main.go:141] libmachine: (no-preload-456788) Waiting for SSH to be available...
	I0429 20:05:33.824323   66218 main.go:141] libmachine: (no-preload-456788) DBG | Getting to WaitForSSH function...
	I0429 20:05:33.826371   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.826678   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.826711   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.826808   66218 main.go:141] libmachine: (no-preload-456788) DBG | Using SSH client type: external
	I0429 20:05:33.826836   66218 main.go:141] libmachine: (no-preload-456788) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa (-rw-------)
	I0429 20:05:33.826863   66218 main.go:141] libmachine: (no-preload-456788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:05:33.826876   66218 main.go:141] libmachine: (no-preload-456788) DBG | About to run SSH command:
	I0429 20:05:33.826887   66218 main.go:141] libmachine: (no-preload-456788) DBG | exit 0
	I0429 20:05:33.954275   66218 main.go:141] libmachine: (no-preload-456788) DBG | SSH cmd err, output: <nil>: 
	I0429 20:05:33.954631   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetConfigRaw
	I0429 20:05:33.955387   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:33.957827   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.958210   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.958241   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.958510   66218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/config.json ...
	I0429 20:05:33.958707   66218 machine.go:94] provisionDockerMachine start ...
	I0429 20:05:33.958726   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:33.958952   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:33.961236   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.961535   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.961564   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.961692   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:33.961857   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:33.962015   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:33.962163   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:33.962339   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:33.962522   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:33.962533   66218 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:05:34.070746   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:05:34.070777   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.071037   66218 buildroot.go:166] provisioning hostname "no-preload-456788"
	I0429 20:05:34.071062   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.071305   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.073680   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.074016   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.074043   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.074203   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.074374   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.074513   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.074612   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.074743   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.074946   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.074960   66218 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-456788 && echo "no-preload-456788" | sudo tee /etc/hostname
	I0429 20:05:34.198256   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-456788
	
	I0429 20:05:34.198286   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.201126   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.201482   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.201521   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.201710   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.201914   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.202055   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.202219   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.202361   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.202549   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.202573   66218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-456788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-456788/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-456788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:05:34.324678   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:34.324710   66218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:05:34.324732   66218 buildroot.go:174] setting up certificates
	I0429 20:05:34.324744   66218 provision.go:84] configureAuth start
	I0429 20:05:34.324756   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.325032   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:34.327623   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.328010   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.328040   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.328149   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.330359   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.330679   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.330711   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.330811   66218 provision.go:143] copyHostCerts
	I0429 20:05:34.330865   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:05:34.330878   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:05:34.330939   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:05:34.331023   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:05:34.331031   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:05:34.331054   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:05:34.331111   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:05:34.331119   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:05:34.331148   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:05:34.331231   66218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.no-preload-456788 san=[127.0.0.1 192.168.39.235 localhost minikube no-preload-456788]
	I0429 20:05:34.444358   66218 provision.go:177] copyRemoteCerts
	I0429 20:05:34.444420   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:05:34.444445   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.447129   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.447432   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.447466   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.447623   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.447833   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.447999   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.448129   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:34.533465   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:05:34.561724   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:05:34.589229   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 20:05:34.617451   66218 provision.go:87] duration metric: took 292.691614ms to configureAuth
	I0429 20:05:34.617491   66218 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:05:34.617733   66218 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:05:34.617821   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.620628   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.621016   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.621047   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.621257   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.621532   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.621718   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.621892   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.622085   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.622289   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.622305   66218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:05:34.908031   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:05:34.908064   66218 machine.go:97] duration metric: took 949.343369ms to provisionDockerMachine
	I0429 20:05:34.908077   66218 start.go:293] postStartSetup for "no-preload-456788" (driver="kvm2")
	I0429 20:05:34.908091   66218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:05:34.908107   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:34.908452   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:05:34.908489   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.911574   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.912026   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.912054   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.912219   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.912428   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.912616   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.912743   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:34.997625   66218 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:05:35.002661   66218 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:05:35.002687   66218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:05:35.002753   66218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:05:35.002822   66218 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:05:35.002906   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:05:35.013292   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:35.039830   66218 start.go:296] duration metric: took 131.741312ms for postStartSetup
	I0429 20:05:35.039865   66218 fix.go:56] duration metric: took 19.736969384s for fixHost
	I0429 20:05:35.039905   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.042526   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.042877   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.042912   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.043032   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.043239   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.043416   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.043534   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.043696   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:35.043848   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:35.043858   66218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:05:35.155463   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421135.123583649
	
	I0429 20:05:35.155485   66218 fix.go:216] guest clock: 1714421135.123583649
	I0429 20:05:35.155496   66218 fix.go:229] Guest: 2024-04-29 20:05:35.123583649 +0000 UTC Remote: 2024-04-29 20:05:35.039869068 +0000 UTC m=+267.371683880 (delta=83.714581ms)
	I0429 20:05:35.155514   66218 fix.go:200] guest clock delta is within tolerance: 83.714581ms
	I0429 20:05:35.155519   66218 start.go:83] releasing machines lock for "no-preload-456788", held for 19.852645936s
	I0429 20:05:35.155544   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.155881   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:35.158682   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.159051   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.159070   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.159205   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.159793   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.159987   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.160077   66218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:05:35.160117   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.160216   66218 ssh_runner.go:195] Run: cat /version.json
	I0429 20:05:35.160244   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.162788   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163016   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163226   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.163250   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163372   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.163449   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.163475   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163537   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.163621   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.163723   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.163791   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.163873   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:35.163920   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.164064   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:35.248518   66218 ssh_runner.go:195] Run: systemctl --version
	I0429 20:05:35.271479   66218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:05:35.423324   66218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:05:35.430371   66218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:05:35.430445   66218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:05:35.447860   66218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:05:35.447886   66218 start.go:494] detecting cgroup driver to use...
	I0429 20:05:35.447949   66218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:05:35.464102   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:05:35.479069   66218 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:05:35.479158   66218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:05:35.493800   66218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:05:35.509284   66218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:05:35.627273   66218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:05:35.785213   66218 docker.go:233] disabling docker service ...
	I0429 20:05:35.785300   66218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:05:35.803584   66218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:05:35.818874   66218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:05:35.984309   66218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:05:36.128841   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:05:36.148237   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:05:36.172144   66218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:05:36.172243   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.191274   66218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:05:36.191353   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.209656   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.224474   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.238802   66218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:05:36.252515   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.264522   66218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.286496   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.299127   66218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:05:36.310702   66218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:05:36.310760   66218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:05:36.336226   66218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:05:36.348617   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:36.474875   66218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:05:36.619181   66218 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:05:36.619257   66218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:05:36.625401   66218 start.go:562] Will wait 60s for crictl version
	I0429 20:05:36.625475   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:36.630232   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:05:36.667005   66218 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:05:36.667093   66218 ssh_runner.go:195] Run: crio --version
	I0429 20:05:36.699758   66218 ssh_runner.go:195] Run: crio --version
	I0429 20:05:36.734406   66218 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:05:36.735853   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:36.738683   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:36.739019   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:36.739049   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:36.739310   66218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 20:05:36.745227   66218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:36.760124   66218 kubeadm.go:877] updating cluster {Name:no-preload-456788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:05:36.760238   66218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:05:36.760278   66218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:05:36.801389   66218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:05:36.801414   66218 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 20:05:36.801470   66218 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:36.801508   66218 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:36.801524   66218 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:36.801559   66218 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:36.801580   66218 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.801632   66218 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0429 20:05:36.801687   66218 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:36.801688   66218 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:36.803301   66218 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:36.803300   66218 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.803308   66218 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:36.803382   66218 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:36.956976   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.964957   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.022376   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.025860   66218 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0429 20:05:37.025893   66218 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0429 20:05:37.025915   66218 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.025924   66218 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:37.025962   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.025964   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.072629   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.072688   66218 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0429 20:05:37.072713   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:37.072741   66218 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.072791   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.118610   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0429 20:05:37.118704   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.118720   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.128364   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0429 20:05:37.128474   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:37.161350   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0429 20:05:37.165670   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0429 20:05:37.165693   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0429 20:05:37.165710   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.165754   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.165762   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0429 20:05:37.165779   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:37.167440   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:37.174173   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:37.180560   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:37.715733   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:35.180393   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .Start
	I0429 20:05:35.180576   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring networks are active...
	I0429 20:05:35.181281   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network default is active
	I0429 20:05:35.181678   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network mk-old-k8s-version-919612 is active
	I0429 20:05:35.182102   66615 main.go:141] libmachine: (old-k8s-version-919612) Getting domain xml...
	I0429 20:05:35.182867   66615 main.go:141] libmachine: (old-k8s-version-919612) Creating domain...
	I0429 20:05:36.459478   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting to get IP...
	I0429 20:05:36.460301   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.460751   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.460817   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.460706   67552 retry.go:31] will retry after 280.48781ms: waiting for machine to come up
	I0429 20:05:36.743188   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.743630   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.743658   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.743591   67552 retry.go:31] will retry after 326.238132ms: waiting for machine to come up
	I0429 20:05:37.071146   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.071576   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.071609   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.071527   67552 retry.go:31] will retry after 380.72234ms: waiting for machine to come up
	I0429 20:05:37.453967   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.454435   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.454464   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.454385   67552 retry.go:31] will retry after 593.303053ms: waiting for machine to come up
	I0429 20:05:38.049072   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.049555   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.049587   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.049500   67552 retry.go:31] will retry after 694.752524ms: waiting for machine to come up
	I0429 20:05:38.746542   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.747034   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.747065   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.747002   67552 retry.go:31] will retry after 860.161186ms: waiting for machine to come up
	I0429 20:05:39.609098   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:39.609601   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:39.609634   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:39.609544   67552 retry.go:31] will retry after 726.889681ms: waiting for machine to come up
	I0429 20:05:39.327634   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.161845487s)
	I0429 20:05:39.327673   66218 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.161870572s)
	I0429 20:05:39.327710   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0429 20:05:39.327675   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0429 20:05:39.327737   66218 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:39.327748   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0: (2.16027023s)
	I0429 20:05:39.327805   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:39.327811   66218 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0429 20:05:39.327821   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0: (2.153617598s)
	I0429 20:05:39.327846   66218 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:39.327878   66218 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0429 20:05:39.327891   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0: (2.147303278s)
	I0429 20:05:39.327910   66218 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:39.327929   66218 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0429 20:05:39.327944   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327954   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.612190652s)
	I0429 20:05:39.327960   66218 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:39.327984   66218 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0429 20:05:39.328035   66218 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:39.328061   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327991   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327886   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.333555   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:39.343257   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:41.263038   66218 ssh_runner.go:235] Completed: which crictl: (1.934889703s)
	I0429 20:05:41.263103   66218 ssh_runner.go:235] Completed: which crictl: (1.93491368s)
	I0429 20:05:41.263121   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:41.263132   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.935299869s)
	I0429 20:05:41.263153   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0: (1.929577799s)
	I0429 20:05:41.263155   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0429 20:05:41.263217   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.919934007s)
	I0429 20:05:41.263221   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0429 20:05:41.263248   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:41.263251   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0429 20:05:41.263290   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:41.263301   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:41.263343   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:41.263159   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:40.338292   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:40.338823   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:40.338864   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:40.338757   67552 retry.go:31] will retry after 1.310400969s: waiting for machine to come up
	I0429 20:05:41.651107   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:41.651625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:41.651670   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:41.651575   67552 retry.go:31] will retry after 1.769756679s: waiting for machine to come up
	I0429 20:05:43.423326   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:43.423829   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:43.423869   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:43.423790   67552 retry.go:31] will retry after 1.748237944s: waiting for machine to come up
	I0429 20:05:44.084051   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.820737476s)
	I0429 20:05:44.084139   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.820774517s)
	I0429 20:05:44.084167   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.820842646s)
	I0429 20:05:44.084186   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0429 20:05:44.084142   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0429 20:05:44.084202   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0429 20:05:44.084211   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:44.084065   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0: (2.820919138s)
	I0429 20:05:44.084244   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0429 20:05:44.084260   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:44.084272   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0: (2.82086612s)
	I0429 20:05:44.084305   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0429 20:05:44.084331   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:44.084375   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:44.091151   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0429 20:05:46.553783   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.469493694s)
	I0429 20:05:46.553882   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0429 20:05:46.553912   66218 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:46.553837   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.469479626s)
	I0429 20:05:46.553973   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:46.553975   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0429 20:05:47.510118   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0429 20:05:47.510169   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:47.510212   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:45.173157   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:45.173617   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:45.173642   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:45.173563   67552 retry.go:31] will retry after 2.784243469s: waiting for machine to come up
	I0429 20:05:47.959942   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:47.960473   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:47.960508   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:47.960410   67552 retry.go:31] will retry after 3.046526969s: waiting for machine to come up
	I0429 20:05:49.069163   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.55892426s)
	I0429 20:05:49.069202   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0429 20:05:49.069231   66218 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:49.069276   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:51.007941   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:51.008230   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:51.008253   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:51.008213   67552 retry.go:31] will retry after 4.220985004s: waiting for machine to come up
	I0429 20:05:56.579154   66875 start.go:364] duration metric: took 3m10.972135355s to acquireMachinesLock for "default-k8s-diff-port-866143"
	I0429 20:05:56.579208   66875 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:56.579230   66875 fix.go:54] fixHost starting: 
	I0429 20:05:56.579615   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:56.579655   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:56.599113   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0429 20:05:56.599627   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:56.600173   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:05:56.600198   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:56.600488   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:56.600694   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:05:56.600849   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:05:56.602291   66875 fix.go:112] recreateIfNeeded on default-k8s-diff-port-866143: state=Stopped err=<nil>
	I0429 20:05:56.602315   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	W0429 20:05:56.602456   66875 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:56.605006   66875 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-866143" ...
	I0429 20:05:53.062693   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.993382111s)
	I0429 20:05:53.062730   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0429 20:05:53.062757   66218 cache_images.go:123] Successfully loaded all cached images
	I0429 20:05:53.062762   66218 cache_images.go:92] duration metric: took 16.261337424s to LoadCachedImages
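For reference, the image-loading phase above boils down to running `sudo podman load -i <archive>` for each cached tarball before the images are served to CRI-O. A minimal standalone Go sketch of that single step, assuming only the standard library and a podman binary on the target; the archive path is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// loadImage shells out to `sudo podman load -i <tarball>`, the same command
// the runner above executes for every cached image archive.
func loadImage(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	// Illustrative path; the log loads archives from /var/lib/minikube/images.
	if err := loadImage("/var/lib/minikube/images/etcd_3.5.12-0"); err != nil {
		fmt.Println(err)
	}
}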
	I0429 20:05:53.062770   66218 kubeadm.go:928] updating node { 192.168.39.235 8443 v1.30.0 crio true true} ...
	I0429 20:05:53.062893   66218 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-456788 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:05:53.062994   66218 ssh_runner.go:195] Run: crio config
	I0429 20:05:53.116289   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:05:53.116311   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:05:53.116322   66218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:05:53.116340   66218 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.235 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-456788 NodeName:no-preload-456788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:05:53.116516   66218 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-456788"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:05:53.116592   66218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:05:53.128095   66218 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:05:53.128174   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:05:53.138786   66218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0429 20:05:53.158151   66218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:05:53.176440   66218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 20:05:53.195348   66218 ssh_runner.go:195] Run: grep 192.168.39.235	control-plane.minikube.internal$ /etc/hosts
	I0429 20:05:53.199408   66218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
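The bash one-liner above drops any stale control-plane.minikube.internal line from /etc/hosts and appends the node IP. A rough standalone Go equivalent of that rewrite, assuming a writable hosts file path (the real run edits /etc/hosts through sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts file so that exactly one line maps
// hostname to ip, mirroring the grep / echo / cp pipeline in the log above.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(strings.TrimSpace(line), "\t"+hostname) {
			continue // skip blank lines and stale entries for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// hosts.test is a stand-in; the real run targets /etc/hosts via sudo cp.
	if err := ensureHostsEntry("hosts.test", "192.168.39.235", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}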
	I0429 20:05:53.212407   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:53.349752   66218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:05:53.368381   66218 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788 for IP: 192.168.39.235
	I0429 20:05:53.368401   66218 certs.go:194] generating shared ca certs ...
	I0429 20:05:53.368415   66218 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:05:53.368565   66218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:05:53.368609   66218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:05:53.368619   66218 certs.go:256] generating profile certs ...
	I0429 20:05:53.368697   66218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.key
	I0429 20:05:53.368751   66218 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.key.5f45c78c
	I0429 20:05:53.368785   66218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.key
	I0429 20:05:53.368889   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:05:53.368915   66218 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:05:53.368921   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:05:53.368944   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:05:53.368972   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:05:53.368993   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:05:53.369029   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:53.369624   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:05:53.428403   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:05:53.467050   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:05:53.501319   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:05:53.528828   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:05:53.553742   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:05:53.582308   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:05:53.609324   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:05:53.636730   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:05:53.663388   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:05:53.690949   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:05:53.717113   66218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:05:53.735784   66218 ssh_runner.go:195] Run: openssl version
	I0429 20:05:53.741879   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:05:53.752930   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.757811   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.757861   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.763798   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:05:53.775019   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:05:53.786654   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.791457   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.791500   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.797608   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:05:53.809139   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:05:53.820927   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.826384   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.826441   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.832798   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:05:53.844300   66218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:05:53.849139   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:05:53.855556   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:05:53.861716   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:05:53.868390   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:05:53.874740   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:05:53.881101   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
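Each `openssl x509 -checkend 86400` call above only asks whether the certificate will still be valid 24 hours from now. A comparable check in Go, with a hypothetical local file path standing in for the /var/lib/minikube/certs/*.crt files the log inspects:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window, similar in spirit to `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path; the log checks apiserver, etcd and front-proxy certs.
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}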
	I0429 20:05:53.887688   66218 kubeadm.go:391] StartCluster: {Name:no-preload-456788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:05:53.887807   66218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:05:53.887858   66218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:05:53.930491   66218 cri.go:89] found id: ""
	I0429 20:05:53.930563   66218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:05:53.941016   66218 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:05:53.941037   66218 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:05:53.941042   66218 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:05:53.941081   66218 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:05:53.950651   66218 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:05:53.951536   66218 kubeconfig.go:125] found "no-preload-456788" server: "https://192.168.39.235:8443"
	I0429 20:05:53.953451   66218 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:05:53.962857   66218 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.235
	I0429 20:05:53.962879   66218 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:05:53.962889   66218 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:05:53.962932   66218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:05:54.000841   66218 cri.go:89] found id: ""
	I0429 20:05:54.000909   66218 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:05:54.018221   66218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:05:54.028524   66218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:05:54.028556   66218 kubeadm.go:156] found existing configuration files:
	
	I0429 20:05:54.028600   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:05:54.038717   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:05:54.038807   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:05:54.049350   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:05:54.059483   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:05:54.059548   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:05:54.069518   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:05:54.078900   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:05:54.078953   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:05:54.088652   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:05:54.098545   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:05:54.098596   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
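The grep/rm sequence above removes any kubeconfig that does not already point at https://control-plane.minikube.internal:8443 so that kubeadm can regenerate it in the next phase. A small Go sketch of the same check-then-delete logic; the relative paths are placeholders for /etc/kubernetes/*.conf:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfMissingEndpoint deletes a kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep-then-rm loop above.
// It reports whether the file was removed.
func removeIfMissingEndpoint(path, endpoint string) (bool, error) {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return false, nil // nothing to clean up
	}
	if err != nil {
		return false, err
	}
	if strings.Contains(string(data), endpoint) {
		return false, nil // already points at the right endpoint
	}
	return true, os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	// Relative paths are placeholders; the log operates on /etc/kubernetes/*.conf.
	for _, p := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		removed, err := removeIfMissingEndpoint(p, endpoint)
		fmt.Println(p, "removed:", removed, "err:", err)
	}
}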
	I0429 20:05:54.108351   66218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:05:54.118645   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:54.236330   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:55.859211   66218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.622843221s)
	I0429 20:05:55.859254   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.075993   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.175176   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.274249   66218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:05:56.274469   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:56.775315   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:57.274840   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:57.315656   66218 api_server.go:72] duration metric: took 1.041421989s to wait for apiserver process to appear ...
	I0429 20:05:57.315697   66218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:05:57.315719   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:05:57.316669   66218 api_server.go:269] stopped: https://192.168.39.235:8443/healthz: Get "https://192.168.39.235:8443/healthz": dial tcp 192.168.39.235:8443: connect: connection refused
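The healthz wait that starts here simply polls https://<node-ip>:8443/healthz until the API server answers 200 OK or a deadline passes; the first attempt fails with "connection refused" because the static pods are still coming up. A minimal polling loop in Go, skipping TLS verification as a plain health probe typically would:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an HTTPS healthz endpoint until it returns 200 OK or
// the deadline passes, roughly the pattern the apiserver wait above follows.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is not trusted by the caller, so skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ready after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.235:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}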
	I0429 20:05:55.230409   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230860   66615 main.go:141] libmachine: (old-k8s-version-919612) Found IP for machine: 192.168.72.240
	I0429 20:05:55.230889   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has current primary IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230898   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserving static IP address...
	I0429 20:05:55.231252   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.231287   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | skip adding static IP to network mk-old-k8s-version-919612 - found existing host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"}
	I0429 20:05:55.231305   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserved static IP address: 192.168.72.240
	I0429 20:05:55.231319   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting for SSH to be available...
	I0429 20:05:55.231335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Getting to WaitForSSH function...
	I0429 20:05:55.233198   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.233500   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH client type: external
	I0429 20:05:55.233671   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa (-rw-------)
	I0429 20:05:55.233706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:05:55.233730   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | About to run SSH command:
	I0429 20:05:55.233747   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | exit 0
	I0429 20:05:55.354242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | SSH cmd err, output: <nil>: 
	I0429 20:05:55.354584   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetConfigRaw
	I0429 20:05:55.355221   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.357791   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.358276   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358564   66615 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/config.json ...
	I0429 20:05:55.358786   66615 machine.go:94] provisionDockerMachine start ...
	I0429 20:05:55.358807   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:55.359037   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.361536   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.361861   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.361885   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.362048   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.362247   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362416   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362568   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.362733   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.362930   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.362943   66615 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:05:55.462364   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:05:55.462388   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462632   66615 buildroot.go:166] provisioning hostname "old-k8s-version-919612"
	I0429 20:05:55.462669   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462852   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.465335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465674   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.465706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465836   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.466034   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466208   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466366   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.466525   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.466729   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.466745   66615 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-919612 && echo "old-k8s-version-919612" | sudo tee /etc/hostname
	I0429 20:05:55.596239   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-919612
	
	I0429 20:05:55.596281   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.599221   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599575   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.599606   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599770   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.599970   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600122   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600316   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.600498   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.600667   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.600690   66615 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-919612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-919612/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-919612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:05:55.716588   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:55.716621   66615 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:05:55.716647   66615 buildroot.go:174] setting up certificates
	I0429 20:05:55.716658   66615 provision.go:84] configureAuth start
	I0429 20:05:55.716671   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.716956   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.719569   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.719919   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.719956   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.720095   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.722484   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.722876   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.722912   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.723036   66615 provision.go:143] copyHostCerts
	I0429 20:05:55.723087   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:05:55.723097   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:05:55.723158   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:05:55.723253   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:05:55.723262   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:05:55.723280   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:05:55.723336   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:05:55.723342   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:05:55.723358   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:05:55.723404   66615 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-919612 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-919612]
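The server.pem generated above is a serving certificate whose SANs cover 127.0.0.1, the machine IP, and the machine host names, signed with the local minikube CA. A shortened Go sketch that produces a certificate with the same SAN set; it self-signs instead of signing with a CA purely to keep the example small:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SAN values mirror the san=[...] list in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-919612"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.240")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-919612"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}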
	I0429 20:05:55.878639   66615 provision.go:177] copyRemoteCerts
	I0429 20:05:55.878724   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:05:55.878750   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.881746   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882306   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.882358   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882540   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.882743   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.882986   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.883139   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:55.973158   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:05:56.003094   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0429 20:05:56.031670   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:05:56.059049   66615 provision.go:87] duration metric: took 342.376371ms to configureAuth
	I0429 20:05:56.059091   66615 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:05:56.059335   66615 config.go:182] Loaded profile config "old-k8s-version-919612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 20:05:56.059441   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.062416   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.062887   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.062921   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.063082   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.063322   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063521   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063688   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.063901   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.064066   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.064082   66615 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:05:56.342484   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:05:56.342511   66615 machine.go:97] duration metric: took 983.711183ms to provisionDockerMachine
	I0429 20:05:56.342525   66615 start.go:293] postStartSetup for "old-k8s-version-919612" (driver="kvm2")
	I0429 20:05:56.342540   66615 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:05:56.342589   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.342931   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:05:56.342983   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.345399   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345710   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.345731   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345869   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.346047   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.346233   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.346418   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.431189   66615 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:05:56.435878   66615 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:05:56.435903   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:05:56.435983   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:05:56.436086   66615 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:05:56.436170   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:05:56.445841   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:56.472683   66615 start.go:296] duration metric: took 130.146591ms for postStartSetup
	I0429 20:05:56.472715   66615 fix.go:56] duration metric: took 21.31705375s for fixHost
	I0429 20:05:56.472736   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.475127   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.475492   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475624   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.475857   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476055   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476211   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.476378   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.476536   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.476547   66615 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:05:56.578999   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421156.548872445
	
	I0429 20:05:56.579028   66615 fix.go:216] guest clock: 1714421156.548872445
	I0429 20:05:56.579040   66615 fix.go:229] Guest: 2024-04-29 20:05:56.548872445 +0000 UTC Remote: 2024-04-29 20:05:56.472718546 +0000 UTC m=+226.572342220 (delta=76.153899ms)
	I0429 20:05:56.579068   66615 fix.go:200] guest clock delta is within tolerance: 76.153899ms
	I0429 20:05:56.579076   66615 start.go:83] releasing machines lock for "old-k8s-version-919612", held for 21.423436193s
	I0429 20:05:56.579111   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.579407   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:56.582338   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582673   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.582711   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582856   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583365   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583543   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583625   66615 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:05:56.583667   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.583765   66615 ssh_runner.go:195] Run: cat /version.json
	I0429 20:05:56.583805   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.586263   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586552   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586618   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586656   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586891   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.586953   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586989   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.587060   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587170   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.587240   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587310   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587458   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587462   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.587600   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.672678   66615 ssh_runner.go:195] Run: systemctl --version
	I0429 20:05:56.694175   66615 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:05:56.859009   66615 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:05:56.865723   66615 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:05:56.865798   66615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:05:56.885686   66615 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:05:56.885714   66615 start.go:494] detecting cgroup driver to use...
	I0429 20:05:56.885805   66615 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:05:56.909082   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:05:56.931583   66615 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:05:56.931646   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:05:56.953524   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:05:56.976170   66615 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:05:57.122813   66615 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:05:57.315725   66615 docker.go:233] disabling docker service ...
	I0429 20:05:57.315786   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:05:57.333927   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:05:57.350022   66615 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:05:57.525787   66615 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:05:57.685802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:05:57.703246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:05:57.730558   66615 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0429 20:05:57.730618   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.747081   66615 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:05:57.747133   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.760168   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.773553   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.787609   66615 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:05:57.800532   66615 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:05:57.813582   66615 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:05:57.813669   66615 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:05:57.832224   66615 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:05:57.844783   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:57.991666   66615 ssh_runner.go:195] Run: sudo systemctl restart crio
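The lines above show cri-o being pointed at the registry.k8s.io/pause:3.2 pause image and the cgroupfs cgroup manager via in-place sed edits, followed by a daemon-reload and a restart of the service. As a minimal sketch only (not minikube's actual ssh_runner code), the same sequence could be driven from Go like this; runSSH is a hypothetical helper that runs one shell command on the guest and returns its combined output:

package provision

import "fmt"

// Minimal sketch: apply the cri-o config edits seen in the log above, then
// restart the service. runSSH is an assumed helper, not a real minikube API.
func configureCRIO(runSSH func(cmd string) (string, error)) error {
	steps := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, cmd := range steps {
		if out, err := runSSH(cmd); err != nil {
			return fmt.Errorf("%q failed: %v (output: %s)", cmd, err, out)
		}
	}
	return nil
}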
	I0429 20:05:58.183635   66615 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:05:58.183718   66615 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:05:58.189441   66615 start.go:562] Will wait 60s for crictl version
	I0429 20:05:58.189509   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:05:58.194049   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:05:58.250751   66615 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:05:58.250839   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.292368   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.336121   66615 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0429 20:05:58.337389   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:58.340707   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341125   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:58.341153   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341387   66615 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0429 20:05:58.346434   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:58.361081   66615 kubeadm.go:877] updating cluster {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:05:58.361242   66615 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 20:05:58.361307   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:05:58.414304   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:05:58.414366   66615 ssh_runner.go:195] Run: which lz4
	I0429 20:05:58.420584   66615 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:05:58.425682   66615 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:05:58.425712   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
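Here the preloaded image tarball is found to be absent on the guest (the stat check exits non-zero), so the ~473 MB archive is copied over before being unpacked. A rough sketch of that flow, assuming two hypothetical helpers runSSH (run a command on the guest) and scpFile (copy a local file to the guest):

package provision

import "fmt"

// Sketch of the preload flow above: if /preloaded.tar.lz4 is missing on the
// guest, transfer it, unpack it into /var, then remove the archive.
func ensurePreload(runSSH func(string) (string, error), scpFile func(local, remote string) error, localTarball string) error {
	if _, err := runSSH(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
		if err := scpFile(localTarball, "/preloaded.tar.lz4"); err != nil {
			return fmt.Errorf("copying preload tarball: %v", err)
		}
	}
	if _, err := runSSH("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return fmt.Errorf("extracting preload tarball: %v", err)
	}
	_, err := runSSH("sudo rm -f /preloaded.tar.lz4")
	return err
}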
	I0429 20:05:56.606748   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Start
	I0429 20:05:56.606929   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring networks are active...
	I0429 20:05:56.607627   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring network default is active
	I0429 20:05:56.608028   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring network mk-default-k8s-diff-port-866143 is active
	I0429 20:05:56.608557   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Getting domain xml...
	I0429 20:05:56.609325   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Creating domain...
	I0429 20:05:57.911657   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting to get IP...
	I0429 20:05:57.912705   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:57.913118   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:57.913211   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:57.913104   67743 retry.go:31] will retry after 298.590493ms: waiting for machine to come up
	I0429 20:05:58.213730   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.214424   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.214578   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:58.214487   67743 retry.go:31] will retry after 375.439886ms: waiting for machine to come up
	I0429 20:05:58.592145   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.592671   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.592700   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:58.592626   67743 retry.go:31] will retry after 432.890106ms: waiting for machine to come up
	I0429 20:05:59.027344   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.027782   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.027812   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:59.027732   67743 retry.go:31] will retry after 547.616894ms: waiting for machine to come up
	I0429 20:05:59.576555   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.577116   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.577140   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:59.577058   67743 retry.go:31] will retry after 662.088326ms: waiting for machine to come up
	I0429 20:06:00.240907   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.241712   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.241744   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:00.241667   67743 retry.go:31] will retry after 691.874394ms: waiting for machine to come up
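The retry.go lines above poll for the default-k8s-diff-port-866143 machine's IP with a delay that grows between attempts. Purely as an illustration of that pattern (not minikube's retry package), a lookup could be wrapped like this, where lookup is an assumed callback:

package provision

import (
	"errors"
	"fmt"
	"time"
)

// Illustrative only: poll for the machine's IP with a growing delay between
// attempts, echoing the "will retry after ..." log lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // back off gradually between attempts
	}
	return "", errors.New("timed out waiting for machine to come up")
}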
	I0429 20:05:57.816218   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.079778   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:01.079817   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:01.079832   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.112008   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:01.112043   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:01.316358   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.322401   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:01.322437   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:01.815974   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.825156   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:01.825219   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:02.316473   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:02.328725   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:02.328763   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:02.816674   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:02.822826   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:02.822866   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:03.315863   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:03.323314   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:03.323366   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:03.816529   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:03.822521   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:03.822556   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:04.316336   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:04.325750   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I0429 20:06:04.337308   66218 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:04.337348   66218 api_server.go:131] duration metric: took 7.02164287s to wait for apiserver health ...
	I0429 20:06:04.337361   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:06:04.337370   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:04.505344   66218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
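The api_server.go block above repeatedly requests https://192.168.39.235:8443/healthz, logging the 403 and 500 responses until the endpoint finally returns 200 "ok", at which point the control-plane version is read and bridge CNI configuration proceeds. A rough sketch of that polling pattern (not minikube's actual implementation; the URL and timings are placeholders):

package health

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Keep requesting /healthz until it returns 200, treating 403/500 responses
// as "not ready yet" and printing their bodies, as in the log above.
func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s to report healthy", url)
}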
	I0429 20:06:00.520217   66615 crio.go:462] duration metric: took 2.099664395s to copy over tarball
	I0429 20:06:00.520314   66615 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:04.082476   66615 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562128598s)
	I0429 20:06:04.082527   66615 crio.go:469] duration metric: took 3.562271241s to extract the tarball
	I0429 20:06:04.082538   66615 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:04.129338   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:04.177683   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:06:04.177709   66615 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 20:06:04.177762   66615 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.177798   66615 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.177817   66615 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.177834   66615 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.177835   66615 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.177783   66615 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.177897   66615 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0429 20:06:04.177972   66615 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179282   66615 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.179360   66615 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.179361   66615 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.179320   66615 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.179331   66615 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.179299   66615 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0429 20:06:04.323997   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376145   66615 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0429 20:06:04.376210   66615 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376261   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.381592   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.420565   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0429 20:06:04.440670   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0429 20:06:04.461763   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.499283   66615 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0429 20:06:04.499347   66615 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0429 20:06:04.499404   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513860   66615 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0429 20:06:04.513900   66615 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.513946   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513988   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0429 20:06:04.548990   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.556713   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.556942   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.556965   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0429 20:06:04.566227   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.598982   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.656930   66615 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0429 20:06:04.656980   66615 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.657038   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.724922   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0429 20:06:04.725179   66615 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0429 20:06:04.725218   66615 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.725262   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732375   66615 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0429 20:06:04.732429   66615 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.732482   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732492   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.732483   66615 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0429 20:06:04.732669   66615 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.732726   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.735419   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.739785   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.742496   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.834684   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0429 20:06:04.834754   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0429 20:06:04.834811   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0429 20:06:04.847076   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
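In the cache_images.go lines above, each required v1.20.0 image is inspected in the container runtime; when the expected tag is not present at the expected digest, the stale tag is removed with crictl and a load from the local cache directory is scheduled. A very rough illustration of those two checks, assuming the same hypothetical runSSH helper as earlier:

package provision

import (
	"fmt"
	"strings"
)

// An image "needs transfer" when the runtime does not already hold it;
// stale tags are removed before loading from the local cache.
func needsTransfer(runSSH func(string) (string, error), image string) bool {
	out, err := runSSH("sudo podman image inspect --format {{.Id}} " + image)
	return err != nil || strings.TrimSpace(out) == ""
}

func removeStale(runSSH func(string) (string, error), image string) error {
	if _, err := runSSH("sudo /usr/bin/crictl rmi " + image); err != nil {
		return fmt.Errorf("removing %s: %v", image, err)
	}
	return nil
}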
	I0429 20:06:00.935382   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.935935   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.935979   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:00.935902   67743 retry.go:31] will retry after 1.024898519s: waiting for machine to come up
	I0429 20:06:01.962446   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:01.963109   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:01.963140   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:01.963059   67743 retry.go:31] will retry after 1.19225855s: waiting for machine to come up
	I0429 20:06:03.157257   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:03.157781   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:03.157843   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:03.157738   67743 retry.go:31] will retry after 1.699779549s: waiting for machine to come up
	I0429 20:06:04.859190   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:04.859622   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:04.859670   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:04.859565   67743 retry.go:31] will retry after 2.307475318s: waiting for machine to come up
	I0429 20:06:04.671477   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:04.684650   66218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:04.718146   66218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:04.908181   66218 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:04.908213   66218 system_pods.go:61] "coredns-7db6d8ff4d-d4kwk" [215ff4b8-3ae5-49a7-8a9f-6acb4d176b93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:04.908223   66218 system_pods.go:61] "etcd-no-preload-456788" [3ec7e177-1b68-4bff-aa4d-803f5346e1be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:04.908231   66218 system_pods.go:61] "kube-apiserver-no-preload-456788" [5e8bf0b0-9669-4f0c-8da1-523589158b16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:04.908236   66218 system_pods.go:61] "kube-controller-manager-no-preload-456788" [515363f7-bde1-4ba7-a5a9-6779f673afaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:04.908240   66218 system_pods.go:61] "kube-proxy-slnph" [29f503bf-ce19-425c-8174-2b8e7b27a424] Running
	I0429 20:06:04.908253   66218 system_pods.go:61] "kube-scheduler-no-preload-456788" [4f394af0-6452-49dd-9770-7c6bfcff3936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:04.908258   66218 system_pods.go:61] "metrics-server-569cc877fc-6mpnm" [5f183615-a243-410a-a524-ebdaa65e6400] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:04.908262   66218 system_pods.go:61] "storage-provisioner" [f74a777d-a3d7-4682-bad0-44bb993a2d43] Running
	I0429 20:06:04.908270   66218 system_pods.go:74] duration metric: took 190.098153ms to wait for pod list to return data ...
	I0429 20:06:04.908278   66218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:05.212876   66218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:05.212913   66218 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:05.212929   66218 node_conditions.go:105] duration metric: took 304.645545ms to run NodePressure ...
	I0429 20:06:05.212950   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:05.913252   66218 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:05.928914   66218 kubeadm.go:733] kubelet initialised
	I0429 20:06:05.928947   66218 kubeadm.go:734] duration metric: took 15.668535ms waiting for restarted kubelet to initialise ...
	I0429 20:06:05.928957   66218 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:05.937357   66218 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:05.091766   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:05.269730   66615 cache_images.go:92] duration metric: took 1.092006107s to LoadCachedImages
	W0429 20:06:05.269839   66615 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0429 20:06:05.269857   66615 kubeadm.go:928] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I0429 20:06:05.269988   66615 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-919612 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:05.270088   66615 ssh_runner.go:195] Run: crio config
	I0429 20:06:05.322439   66615 cni.go:84] Creating CNI manager for ""
	I0429 20:06:05.322471   66615 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:05.322486   66615 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:05.322522   66615 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-919612 NodeName:old-k8s-version-919612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0429 20:06:05.322746   66615 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-919612"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:05.322810   66615 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0429 20:06:05.340981   66615 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:05.341058   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:05.357048   66615 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0429 20:06:05.384352   66615 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:05.407887   66615 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0429 20:06:05.431531   66615 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:05.437567   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:05.457652   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:05.610358   66615 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:05.641538   66615 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612 for IP: 192.168.72.240
	I0429 20:06:05.641568   66615 certs.go:194] generating shared ca certs ...
	I0429 20:06:05.641583   66615 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:05.641758   66615 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:05.641831   66615 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:05.641843   66615 certs.go:256] generating profile certs ...
	I0429 20:06:05.641948   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.key
	I0429 20:06:05.642020   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key.5df5e618
	I0429 20:06:05.642083   66615 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key
	I0429 20:06:05.642256   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:05.642304   66615 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:05.642325   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:05.642364   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:05.642401   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:05.642435   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:05.642489   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:05.643156   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:05.691350   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:05.734434   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:05.773056   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:05.819778   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 20:06:05.868256   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:05.911589   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:05.957714   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 20:06:06.002120   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:06.039736   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:06.079636   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:06.118317   66615 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:06.145932   66615 ssh_runner.go:195] Run: openssl version
	I0429 20:06:06.152970   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:06.166609   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.171939   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.172033   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.179153   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:06.193491   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:06.207800   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214803   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214876   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.222154   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:06.236908   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:06.254197   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260797   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260863   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.267635   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:06.282727   66615 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:06.289580   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:06.301014   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:06.310503   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:06.318708   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:06.325718   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:06.332690   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 20:06:06.339914   66615 kubeadm.go:391] StartCluster: {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:06.340012   66615 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:06.340069   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.391511   66615 cri.go:89] found id: ""
	I0429 20:06:06.391618   66615 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:06.408955   66615 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:06.408985   66615 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:06.408991   66615 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:06.409060   66615 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:06.425276   66615 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:06.426397   66615 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-919612" does not appear in /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:06:06.427298   66615 kubeconfig.go:62] /home/jenkins/minikube-integration/18774-7754/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-919612" cluster setting kubeconfig missing "old-k8s-version-919612" context setting]
	I0429 20:06:06.428287   66615 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:06.429908   66615 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:06.443630   66615 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.240
	I0429 20:06:06.443674   66615 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:06.443686   66615 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:06.443753   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.486251   66615 cri.go:89] found id: ""
	I0429 20:06:06.486339   66615 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:06.507136   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:06.523798   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:06.523828   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:06.523887   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:06:06.536668   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:06.536735   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:06.547800   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:06:06.560435   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:06.560517   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:06.572227   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.582772   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:06.582825   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.594168   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:06:06.605940   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:06.606013   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:06.621829   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:06.637520   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:06.779910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:07.921143   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.141191032s)
	I0429 20:06:07.921178   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.172381   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.276243   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.398312   66615 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:08.398424   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:08.899388   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.399344   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.898731   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:07.168679   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:07.169214   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:07.169264   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:07.169146   67743 retry.go:31] will retry after 2.050354993s: waiting for machine to come up
	I0429 20:06:09.221915   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:09.222545   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:09.222581   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:09.222449   67743 retry.go:31] will retry after 2.544889222s: waiting for machine to come up
	I0429 20:06:07.947247   66218 pod_ready.go:102] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:10.449364   66218 pod_ready.go:102] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:10.943731   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:10.943754   66218 pod_ready.go:81] duration metric: took 5.006367348s for pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:10.943763   66218 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.453825   66218 pod_ready.go:92] pod "etcd-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.453853   66218 pod_ready.go:81] duration metric: took 1.510082371s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.453865   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.462971   66218 pod_ready.go:92] pod "kube-apiserver-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.462997   66218 pod_ready.go:81] duration metric: took 9.123374ms for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.463011   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.471032   66218 pod_ready.go:92] pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.471066   66218 pod_ready.go:81] duration metric: took 8.024113ms for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.471077   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-slnph" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.478671   66218 pod_ready.go:92] pod "kube-proxy-slnph" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.478695   66218 pod_ready.go:81] duration metric: took 7.609313ms for pod "kube-proxy-slnph" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.478706   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.542851   66218 pod_ready.go:92] pod "kube-scheduler-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.542875   66218 pod_ready.go:81] duration metric: took 64.16109ms for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.542888   66218 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:10.399055   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:10.898742   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.399250   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.898511   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.399301   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.899399   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.399242   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.899417   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.398526   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.898976   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.768576   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:11.768967   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:11.769003   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:11.768924   67743 retry.go:31] will retry after 3.829285986s: waiting for machine to come up
	I0429 20:06:17.032004   65980 start.go:364] duration metric: took 56.727982697s to acquireMachinesLock for "embed-certs-161370"
	I0429 20:06:17.032074   65980 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:06:17.032085   65980 fix.go:54] fixHost starting: 
	I0429 20:06:17.032452   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:17.032485   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:17.050767   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44211
	I0429 20:06:17.051181   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:17.051655   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:06:17.051680   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:17.052002   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:17.052188   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:17.052363   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:06:17.053975   65980 fix.go:112] recreateIfNeeded on embed-certs-161370: state=Stopped err=<nil>
	I0429 20:06:17.054002   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	W0429 20:06:17.054167   65980 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:06:17.056054   65980 out.go:177] * Restarting existing kvm2 VM for "embed-certs-161370" ...
	I0429 20:06:14.550615   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:17.050288   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:17.057452   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Start
	I0429 20:06:17.057630   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring networks are active...
	I0429 20:06:17.058381   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring network default is active
	I0429 20:06:17.058680   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring network mk-embed-certs-161370 is active
	I0429 20:06:17.059024   65980 main.go:141] libmachine: (embed-certs-161370) Getting domain xml...
	I0429 20:06:17.059697   65980 main.go:141] libmachine: (embed-certs-161370) Creating domain...
	I0429 20:06:15.599423   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.599897   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has current primary IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.599915   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Found IP for machine: 192.168.61.106
	I0429 20:06:15.599929   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Reserving static IP address...
	I0429 20:06:15.600318   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Reserved static IP address: 192.168.61.106
	I0429 20:06:15.600360   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-866143", mac: "52:54:00:af:de:09", ip: "192.168.61.106"} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.600375   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for SSH to be available...
	I0429 20:06:15.600405   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | skip adding static IP to network mk-default-k8s-diff-port-866143 - found existing host DHCP lease matching {name: "default-k8s-diff-port-866143", mac: "52:54:00:af:de:09", ip: "192.168.61.106"}
	I0429 20:06:15.600423   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Getting to WaitForSSH function...
	I0429 20:06:15.602983   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.603379   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.603414   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.603581   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Using SSH client type: external
	I0429 20:06:15.603611   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa (-rw-------)
	I0429 20:06:15.603675   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:06:15.603701   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | About to run SSH command:
	I0429 20:06:15.603733   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | exit 0
	I0429 20:06:15.734933   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | SSH cmd err, output: <nil>: 
	I0429 20:06:15.735306   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetConfigRaw
	I0429 20:06:15.735918   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:15.738878   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.739349   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.739385   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.739745   66875 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/config.json ...
	I0429 20:06:15.739943   66875 machine.go:94] provisionDockerMachine start ...
	I0429 20:06:15.739966   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:15.740215   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.742731   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.743068   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.743097   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.743253   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.743448   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.743592   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.743729   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.743859   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.744066   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.744080   66875 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:06:15.855258   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:06:15.855292   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:15.855585   66875 buildroot.go:166] provisioning hostname "default-k8s-diff-port-866143"
	I0429 20:06:15.855604   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:15.855792   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.858278   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.858644   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.858672   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.858802   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.858996   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.859179   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.859327   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.859498   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.859667   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.859682   66875 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-866143 && echo "default-k8s-diff-port-866143" | sudo tee /etc/hostname
	I0429 20:06:15.986031   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-866143
	
	I0429 20:06:15.986094   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.989211   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.989633   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.989666   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.989858   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.990078   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.990281   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.990441   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.990591   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.990746   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.990763   66875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-866143' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-866143/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-866143' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:06:16.119358   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:06:16.119389   66875 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:06:16.119420   66875 buildroot.go:174] setting up certificates
	I0429 20:06:16.119431   66875 provision.go:84] configureAuth start
	I0429 20:06:16.119442   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:16.119741   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:16.122611   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.122991   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.123016   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.123180   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.125378   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.125673   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.125713   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.125805   66875 provision.go:143] copyHostCerts
	I0429 20:06:16.125883   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:06:16.125896   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:06:16.125963   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:06:16.126112   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:06:16.126125   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:06:16.126152   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:06:16.126234   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:06:16.126245   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:06:16.126270   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:06:16.126348   66875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-866143 san=[127.0.0.1 192.168.61.106 default-k8s-diff-port-866143 localhost minikube]
	I0429 20:06:16.280583   66875 provision.go:177] copyRemoteCerts
	I0429 20:06:16.280641   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:06:16.280665   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.283452   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.283760   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.283800   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.283999   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.284175   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.284335   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.284428   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:16.374564   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:06:16.408695   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0429 20:06:16.441975   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:06:16.470921   66875 provision.go:87] duration metric: took 351.479703ms to configureAuth
	I0429 20:06:16.470946   66875 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:06:16.471124   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:16.471205   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.473799   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.474105   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.474139   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.474291   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.474502   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.474692   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.474830   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.474995   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:16.475152   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:16.475167   66875 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:06:16.774044   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:06:16.774093   66875 machine.go:97] duration metric: took 1.034135495s to provisionDockerMachine
	I0429 20:06:16.774108   66875 start.go:293] postStartSetup for "default-k8s-diff-port-866143" (driver="kvm2")
	I0429 20:06:16.774123   66875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:06:16.774148   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:16.774509   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:06:16.774539   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.777163   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.777603   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.777639   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.777779   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.777949   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.778109   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.778259   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:16.866104   66875 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:06:16.870760   66875 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:06:16.870780   66875 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:06:16.870839   66875 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:06:16.870916   66875 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:06:16.871003   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:06:16.881137   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:16.911284   66875 start.go:296] duration metric: took 137.163661ms for postStartSetup
	I0429 20:06:16.911318   66875 fix.go:56] duration metric: took 20.332102679s for fixHost
	I0429 20:06:16.911337   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.914440   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.914810   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.914838   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.915087   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.915287   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.915511   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.915692   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.915886   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:16.916034   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:16.916045   66875 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:06:17.031867   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421177.003309274
	
	I0429 20:06:17.031892   66875 fix.go:216] guest clock: 1714421177.003309274
	I0429 20:06:17.031900   66875 fix.go:229] Guest: 2024-04-29 20:06:17.003309274 +0000 UTC Remote: 2024-04-29 20:06:16.911322778 +0000 UTC m=+211.453402116 (delta=91.986496ms)
	I0429 20:06:17.031921   66875 fix.go:200] guest clock delta is within tolerance: 91.986496ms
	I0429 20:06:17.031928   66875 start.go:83] releasing machines lock for "default-k8s-diff-port-866143", held for 20.452741912s
	I0429 20:06:17.031957   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.032261   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:17.035096   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.035467   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.035497   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.035620   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036246   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036425   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036515   66875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:06:17.036569   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:17.036698   66875 ssh_runner.go:195] Run: cat /version.json
	I0429 20:06:17.036726   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:17.039300   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039595   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039813   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.039848   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039907   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:17.039984   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.040017   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.040069   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:17.040172   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:17.040230   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:17.040329   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:17.040382   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:17.040483   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:17.040636   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:17.137510   66875 ssh_runner.go:195] Run: systemctl --version
	I0429 20:06:17.160834   66875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:06:17.320792   66875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:06:17.328367   66875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:06:17.328448   66875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:06:17.349698   66875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:06:17.349724   66875 start.go:494] detecting cgroup driver to use...
	I0429 20:06:17.349807   66875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:06:17.372156   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:06:17.388142   66875 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:06:17.388206   66875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:06:17.406108   66875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:06:17.422323   66875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:06:17.555079   66875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:06:17.727126   66875 docker.go:233] disabling docker service ...
	I0429 20:06:17.727194   66875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:06:17.743136   66875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:06:17.757045   66875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:06:17.885705   66875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:06:18.021993   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:06:18.039020   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:06:18.063267   66875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:06:18.063330   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.076473   66875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:06:18.076545   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.089566   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.102912   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.116940   66875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:06:18.130940   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.150505   66875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.177724   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.191088   66875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:06:18.203560   66875 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:06:18.203635   66875 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:06:18.221087   66875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:06:18.233719   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:18.383406   66875 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:06:18.543941   66875 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:06:18.544029   66875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:06:18.550828   66875 start.go:562] Will wait 60s for crictl version
	I0429 20:06:18.550891   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:06:18.556158   66875 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:06:18.607004   66875 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:06:18.607083   66875 ssh_runner.go:195] Run: crio --version
	I0429 20:06:18.638282   66875 ssh_runner.go:195] Run: crio --version
	I0429 20:06:18.674135   66875 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:06:15.399474   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:15.899352   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.899106   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.899205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.399351   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.899319   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.399303   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.898824   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.675590   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:18.678673   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:18.679055   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:18.679096   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:18.679272   66875 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0429 20:06:18.685110   66875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:18.705804   66875 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:06:18.705967   66875 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:06:18.706036   66875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:18.750754   66875 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:06:18.750823   66875 ssh_runner.go:195] Run: which lz4
	I0429 20:06:18.755893   66875 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:06:18.760892   66875 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:06:18.760921   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 20:06:19.055680   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:21.552080   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:18.301855   65980 main.go:141] libmachine: (embed-certs-161370) Waiting to get IP...
	I0429 20:06:18.302804   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.303231   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.303273   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.303198   67921 retry.go:31] will retry after 279.123731ms: waiting for machine to come up
	I0429 20:06:18.584013   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.584661   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.584703   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.584630   67921 retry.go:31] will retry after 239.910483ms: waiting for machine to come up
	I0429 20:06:18.825978   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.826393   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.826425   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.826349   67921 retry.go:31] will retry after 312.324444ms: waiting for machine to come up
	I0429 20:06:19.139999   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:19.140583   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:19.140611   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:19.140535   67921 retry.go:31] will retry after 498.525047ms: waiting for machine to come up
	I0429 20:06:19.640244   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:19.640797   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:19.640828   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:19.640756   67921 retry.go:31] will retry after 479.301061ms: waiting for machine to come up
	I0429 20:06:20.121396   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:20.121982   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:20.122015   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:20.121941   67921 retry.go:31] will retry after 706.389673ms: waiting for machine to come up
	I0429 20:06:20.829691   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:20.830191   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:20.830247   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:20.830166   67921 retry.go:31] will retry after 1.145397308s: waiting for machine to come up
	I0429 20:06:21.977290   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:21.977747   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:21.977779   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:21.977691   67921 retry.go:31] will retry after 955.977029ms: waiting for machine to come up
	I0429 20:06:20.399233   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.898571   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.398855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.898885   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.399328   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.398965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.899248   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.398833   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.899039   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.561047   66875 crio.go:462] duration metric: took 1.805186908s to copy over tarball
	I0429 20:06:20.561137   66875 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:23.264543   66875 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.703371921s)
	I0429 20:06:23.264573   66875 crio.go:469] duration metric: took 2.7034954s to extract the tarball
	I0429 20:06:23.264581   66875 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:23.303558   66875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:23.356825   66875 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 20:06:23.356854   66875 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:06:23.356873   66875 kubeadm.go:928] updating node { 192.168.61.106 8444 v1.30.0 crio true true} ...
	I0429 20:06:23.357007   66875 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-866143 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:23.357105   66875 ssh_runner.go:195] Run: crio config
	I0429 20:06:23.414195   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:06:23.414225   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:23.414237   66875 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:23.414267   66875 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.106 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-866143 NodeName:default-k8s-diff-port-866143 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:06:23.414459   66875 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.106
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-866143"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:23.414524   66875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:06:23.425977   66875 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:23.426089   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:23.437270   66875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0429 20:06:23.457613   66875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:23.479383   66875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0429 20:06:23.509517   66875 ssh_runner.go:195] Run: grep 192.168.61.106	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:23.514202   66875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:23.528721   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:23.666941   66875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:23.687710   66875 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143 for IP: 192.168.61.106
	I0429 20:06:23.687745   66875 certs.go:194] generating shared ca certs ...
	I0429 20:06:23.687768   66875 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:23.687952   66875 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:23.688005   66875 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:23.688020   66875 certs.go:256] generating profile certs ...
	I0429 20:06:23.688168   66875 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.key
	I0429 20:06:23.688260   66875 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.key.5d7fbd4b
	I0429 20:06:23.688318   66875 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.key
	I0429 20:06:23.688481   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:23.688532   66875 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:23.688548   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:23.688592   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:23.688628   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:23.688663   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:23.688722   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:23.689611   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:23.743834   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:23.783115   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:23.819086   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:23.850794   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0429 20:06:23.882477   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:23.918607   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:23.947837   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:06:23.977241   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:24.005902   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:24.034910   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:24.064119   66875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:24.083879   66875 ssh_runner.go:195] Run: openssl version
	I0429 20:06:24.090651   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:24.104929   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.110955   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.111034   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.117914   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:24.131076   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:24.144790   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.150842   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.150926   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.157842   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:24.171737   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:24.186164   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.191924   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.191995   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.199385   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:24.213392   66875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:24.219369   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:24.226784   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:24.234655   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:24.242406   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:24.249904   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:24.257400   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 20:06:24.264165   66875 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:24.264290   66875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:24.264353   66875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:24.310126   66875 cri.go:89] found id: ""
	I0429 20:06:24.310197   66875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:24.322134   66875 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:24.322155   66875 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:24.322160   66875 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:24.322223   66875 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:24.337713   66875 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:24.339184   66875 kubeconfig.go:125] found "default-k8s-diff-port-866143" server: "https://192.168.61.106:8444"
	I0429 20:06:24.342237   66875 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:24.353500   66875 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.106
	I0429 20:06:24.353545   66875 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:24.353560   66875 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:24.353627   66875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:24.399835   66875 cri.go:89] found id: ""
	I0429 20:06:24.399918   66875 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:24.426456   66875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:24.440261   66875 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:24.440282   66875 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:24.440376   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0429 20:06:24.450699   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:24.450766   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:24.462870   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0429 20:06:24.474894   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:24.474961   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:24.488607   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0429 20:06:24.499626   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:24.499685   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:24.514156   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0429 20:06:24.525958   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:24.526018   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:24.537063   66875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:24.548503   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:24.687916   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:24.051367   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:26.550970   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:22.935362   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:22.935797   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:22.935827   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:22.935746   67921 retry.go:31] will retry after 1.25494649s: waiting for machine to come up
	I0429 20:06:24.192017   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:24.192613   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:24.192641   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:24.192556   67921 retry.go:31] will retry after 1.641885834s: waiting for machine to come up
	I0429 20:06:25.836686   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:25.837170   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:25.837193   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:25.837125   67921 retry.go:31] will retry after 2.794216099s: waiting for machine to come up
	I0429 20:06:25.398515   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:25.898944   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.399360   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.899294   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.399520   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.899434   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.398734   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.898479   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.399413   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.899236   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.234143   66875 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.546180467s)
	I0429 20:06:26.234181   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.502030   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.577778   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.689836   66875 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:26.689982   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.190231   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.690207   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.729434   66875 api_server.go:72] duration metric: took 1.039599386s to wait for apiserver process to appear ...
	I0429 20:06:27.729473   66875 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:06:27.729497   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:27.730016   66875 api_server.go:269] stopped: https://192.168.61.106:8444/healthz: Get "https://192.168.61.106:8444/healthz": dial tcp 192.168.61.106:8444: connect: connection refused
	I0429 20:06:28.230353   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:28.551049   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:31.051387   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:31.411151   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:31.411188   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:31.411205   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:31.424074   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:31.424106   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:31.729916   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:31.737269   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:31.737299   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:32.229834   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:32.237900   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:32.237935   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:32.730529   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:32.735043   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 200:
	ok
	I0429 20:06:32.743999   66875 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:32.744026   66875 api_server.go:131] duration metric: took 5.014546615s to wait for apiserver health ...
	I0429 20:06:32.744035   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:06:32.744041   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:32.745889   66875 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
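The 403 and 500 responses above come from minikube repeatedly polling the apiserver's /healthz endpoint until the post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) stop failing and the endpoint returns 200. The Go sketch below shows roughly what such a wait loop looks like; it is not minikube's api_server.go, and the function name pollAPIServerHealthz, the 500ms poll interval, and the disabled TLS verification are assumptions made only to keep the example short.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollAPIServerHealthz polls https://<host>/healthz until it returns 200 OK
// or the deadline expires. Skipping TLS verification is a shortcut for this
// sketch; a real client would trust the cluster CA instead.
func pollAPIServerHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "healthz returned 200: ok" case in the log
			}
			// 403 (anonymous user) and 500 (post-start hooks still failing)
			// both mean "not ready yet", so keep retrying, as the log shows.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := pollAPIServerHealthz("https://192.168.61.106:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}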
	I0429 20:06:28.633451   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:28.633950   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:28.633979   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:28.633906   67921 retry.go:31] will retry after 2.251092878s: waiting for machine to come up
	I0429 20:06:30.887722   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:30.888251   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:30.888283   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:30.888208   67921 retry.go:31] will retry after 2.941721217s: waiting for machine to come up
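The retry.go lines above show libmachine backing off with growing delays while it waits for the embed-certs VM to obtain a DHCP lease. A minimal sketch of that retry-with-increasing-jittered-delay pattern, assuming a hypothetical lookupIP callback in place of the real libvirt DHCP-lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookupIP with jittered, doubling delays until the domain
// reports an address, loosely mirroring the "will retry after ..." lines above.
func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.50.184", nil
	}, 10)
	fmt.Println(ip, err)
}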
	I0429 20:06:32.747198   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:32.760578   66875 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:32.786719   66875 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:32.797795   66875 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:32.797830   66875 system_pods.go:61] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:32.797838   66875 system_pods.go:61] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:32.797844   66875 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:32.797854   66875 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:32.797859   66875 system_pods.go:61] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0429 20:06:32.797866   66875 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:32.797873   66875 system_pods.go:61] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:32.797880   66875 system_pods.go:61] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0429 20:06:32.797888   66875 system_pods.go:74] duration metric: took 11.14839ms to wait for pod list to return data ...
	I0429 20:06:32.797902   66875 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:32.801888   66875 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:32.801909   66875 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:32.801918   66875 node_conditions.go:105] duration metric: took 4.010782ms to run NodePressure ...
	I0429 20:06:32.801934   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:33.088679   66875 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:33.094165   66875 kubeadm.go:733] kubelet initialised
	I0429 20:06:33.094185   66875 kubeadm.go:734] duration metric: took 5.479589ms waiting for restarted kubelet to initialise ...
	I0429 20:06:33.094192   66875 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:33.101524   66875 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.106879   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.106911   66875 pod_ready.go:81] duration metric: took 5.352162ms for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.106923   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.106946   66875 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.111446   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.111469   66875 pod_ready.go:81] duration metric: took 4.507858ms for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.111478   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.111483   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.115613   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.115643   66875 pod_ready.go:81] duration metric: took 4.152743ms for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.115654   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.115663   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.191660   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.191695   66875 pod_ready.go:81] duration metric: took 76.012388ms for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.191707   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.191713   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.592489   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-proxy-zddtx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.592522   66875 pod_ready.go:81] duration metric: took 400.801861ms for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.592535   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-proxy-zddtx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.592544   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.990624   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.990655   66875 pod_ready.go:81] duration metric: took 398.101779ms for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.990667   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.990673   66875 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:34.391120   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:34.391148   66875 pod_ready.go:81] duration metric: took 400.467456ms for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:34.391165   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:34.391173   66875 pod_ready.go:38] duration metric: took 1.296972775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:34.391191   66875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:06:34.408817   66875 ops.go:34] apiserver oom_adj: -16
	I0429 20:06:34.408845   66875 kubeadm.go:591] duration metric: took 10.086677852s to restartPrimaryControlPlane
	I0429 20:06:34.408856   66875 kubeadm.go:393] duration metric: took 10.144698168s to StartCluster
	I0429 20:06:34.408876   66875 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:34.408961   66875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:06:34.411093   66875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:34.411379   66875 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:06:34.413055   66875 out.go:177] * Verifying Kubernetes components...
	I0429 20:06:34.411518   66875 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:06:34.411607   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:34.414229   66875 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414239   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:34.414261   66875 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-866143"
	I0429 20:06:34.414238   66875 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414232   66875 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414341   66875 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.414355   66875 addons.go:243] addon metrics-server should already be in state true
	I0429 20:06:34.414382   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.414381   66875 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.414396   66875 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:06:34.414439   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.414650   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414677   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.414746   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414758   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.414890   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414923   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.433279   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I0429 20:06:34.433827   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.434444   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.434474   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.434873   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.435436   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.435483   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.435739   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46105
	I0429 20:06:34.435746   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0429 20:06:34.436117   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.436245   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.436638   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.436678   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.436734   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.436747   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.437011   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.437057   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.437218   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.437601   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.437630   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.441092   66875 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.441118   66875 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:06:34.441146   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.441550   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.441582   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.451571   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0429 20:06:34.452041   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.452627   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.452650   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.453080   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.453401   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.455145   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0429 20:06:34.455335   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.457339   66875 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:34.455992   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.456826   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I0429 20:06:34.458912   66875 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:06:34.458925   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:06:34.458942   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.459155   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.459818   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.459836   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.460050   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.460068   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.460196   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.460406   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.460450   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.461005   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.461051   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.462529   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.462624   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.464140   66875 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:06:30.398730   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:30.898542   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.399309   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.898751   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.399374   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.899262   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.398723   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.899281   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.399356   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.899305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.463014   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.463255   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.465585   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.465598   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:06:34.465623   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:06:34.465652   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.465703   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.465892   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.466043   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.468951   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.469342   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.469407   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.469645   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.469817   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.469984   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.470137   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.484411   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0429 20:06:34.484864   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.485366   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.485396   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.485759   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.485937   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.487715   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.487962   66875 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:06:34.487975   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:06:34.487989   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.490407   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.490724   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.490748   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.490890   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.491045   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.491146   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.491274   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.618088   66875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:34.638582   66875 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-866143" to be "Ready" ...
	I0429 20:06:34.729046   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:06:34.729633   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:06:34.729649   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:06:34.752200   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:06:34.770107   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:06:34.770143   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:06:34.847081   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:06:34.847117   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:06:34.889992   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:06:35.821090   66875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091986938s)
	I0429 20:06:35.821127   66875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.068905753s)
	I0429 20:06:35.821145   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821150   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821157   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821162   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821490   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821505   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821514   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821524   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821528   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821534   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821549   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821540   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821902   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821923   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821936   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.822007   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.822024   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.828303   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.828348   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.828591   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.828606   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.828632   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.843540   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.843566   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.843860   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.843877   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.843886   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.843894   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.844127   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.844170   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.844188   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.844203   66875 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-866143"
	I0429 20:06:35.846214   66875 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
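The addon deployment above boils down to copying the manifests onto the node and running kubectl apply against them with the node's kubeconfig, exactly the "sudo KUBECONFIG=... kubectl apply -f ..." commands in the log. A rough local sketch of that invocation, assuming kubectl is on PATH; the manifest paths and kubeconfig location are copied from the log and only exist inside the test VM:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests runs a single `kubectl apply` over all manifests with
// the given kubeconfig, mirroring the shape of the logged commands.
func applyAddonManifests(kubeconfig string, manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	_ = applyAddonManifests("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
}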
	I0429 20:06:33.549917   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:35.550564   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:33.831181   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:33.831552   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:33.831581   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:33.831506   67921 retry.go:31] will retry after 5.040485428s: waiting for machine to come up
	I0429 20:06:35.399419   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:35.899244   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.398934   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.898847   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.399273   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.899102   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.398748   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.399524   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.898813   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
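The repeated ssh_runner lines above are another profile (process 66615) polling for a running kube-apiserver with pgrep every half second. A local stand-in for that loop, assuming the command is run directly rather than over SSH as the real code does:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a process matching the pattern
// appears or the timeout expires; pgrep exits non-zero when nothing matches,
// which surfaces here as err != nil.
func waitForAPIServerProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err == nil {
			return string(out), nil // PID of the newest matching process
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matching %q within %s", pattern, timeout)
}

func main() {
	pid, err := waitForAPIServerProcess("kube-apiserver.*minikube.*", 30*time.Second)
	fmt.Println(pid, err)
}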
	I0429 20:06:35.847674   66875 addons.go:505] duration metric: took 1.436173952s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0429 20:06:36.641963   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:38.642738   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
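The node_ready.go and pod_ready.go lines above are both polling the Kubernetes API for Ready conditions. The sketch below checks a node's Ready condition with client-go; it is only an illustration of that kind of wait, and the kubeconfig path and node name are taken from the log, so outside this CI environment they are placeholders.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the named node has the Ready condition set to
// True, which is what the wait above keeps checking for.
func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18774-7754/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := nodeIsReady(context.Background(), cs, "default-k8s-diff-port-866143")
		fmt.Println("ready:", ready, "err:", err)
		if ready || err != nil {
			break
		}
		time.Sleep(2 * time.Second)
	}
}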
	I0429 20:06:38.873188   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.873625   65980 main.go:141] libmachine: (embed-certs-161370) Found IP for machine: 192.168.50.184
	I0429 20:06:38.873653   65980 main.go:141] libmachine: (embed-certs-161370) Reserving static IP address...
	I0429 20:06:38.873669   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has current primary IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.874037   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "embed-certs-161370", mac: "52:54:00:e6:05:1f", ip: "192.168.50.184"} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:38.874091   65980 main.go:141] libmachine: (embed-certs-161370) Reserved static IP address: 192.168.50.184
	I0429 20:06:38.874113   65980 main.go:141] libmachine: (embed-certs-161370) DBG | skip adding static IP to network mk-embed-certs-161370 - found existing host DHCP lease matching {name: "embed-certs-161370", mac: "52:54:00:e6:05:1f", ip: "192.168.50.184"}
	I0429 20:06:38.874132   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Getting to WaitForSSH function...
	I0429 20:06:38.874151   65980 main.go:141] libmachine: (embed-certs-161370) Waiting for SSH to be available...
	I0429 20:06:38.875891   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.876205   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:38.876237   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.876401   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Using SSH client type: external
	I0429 20:06:38.876425   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa (-rw-------)
	I0429 20:06:38.876455   65980 main.go:141] libmachine: (embed-certs-161370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:06:38.876475   65980 main.go:141] libmachine: (embed-certs-161370) DBG | About to run SSH command:
	I0429 20:06:38.876486   65980 main.go:141] libmachine: (embed-certs-161370) DBG | exit 0
	I0429 20:06:39.006684   65980 main.go:141] libmachine: (embed-certs-161370) DBG | SSH cmd err, output: <nil>: 
	I0429 20:06:39.007072   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetConfigRaw
	I0429 20:06:39.007701   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:39.010189   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.010539   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.010577   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.010783   65980 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/config.json ...
	I0429 20:06:39.010970   65980 machine.go:94] provisionDockerMachine start ...
	I0429 20:06:39.010986   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:39.011196   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.013422   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.013832   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.013862   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.013986   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.014183   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.014377   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.014528   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.014710   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.014868   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.014878   65980 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:06:39.119151   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:06:39.119183   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.119425   65980 buildroot.go:166] provisioning hostname "embed-certs-161370"
	I0429 20:06:39.119449   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.119606   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.122418   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.122725   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.122755   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.122894   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.123087   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.123235   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.123371   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.123547   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.123719   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.123734   65980 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-161370 && echo "embed-certs-161370" | sudo tee /etc/hostname
	I0429 20:06:39.247323   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-161370
	
	I0429 20:06:39.247360   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.250202   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.250594   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.250623   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.250761   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.250956   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.251158   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.251354   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.251536   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.251724   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.251746   65980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-161370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-161370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-161370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:06:39.370366   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:06:39.370395   65980 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:06:39.370415   65980 buildroot.go:174] setting up certificates
	I0429 20:06:39.370429   65980 provision.go:84] configureAuth start
	I0429 20:06:39.370441   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.370754   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:39.373600   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.373977   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.374011   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.374305   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.376654   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.376999   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.377032   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.377156   65980 provision.go:143] copyHostCerts
	I0429 20:06:39.377217   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:06:39.377228   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:06:39.377279   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:06:39.377367   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:06:39.377375   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:06:39.377393   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:06:39.377446   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:06:39.377453   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:06:39.377470   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:06:39.377523   65980 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.embed-certs-161370 san=[127.0.0.1 192.168.50.184 embed-certs-161370 localhost minikube]
	I0429 20:06:39.441865   65980 provision.go:177] copyRemoteCerts
	I0429 20:06:39.441931   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:06:39.441954   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.445189   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.445633   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.445677   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.445918   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.446166   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.446364   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.446521   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:39.535703   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:06:39.571033   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0429 20:06:39.604181   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:06:39.639250   65980 provision.go:87] duration metric: took 268.808275ms to configureAuth
	I0429 20:06:39.639339   65980 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:06:39.639575   65980 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:39.639668   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.642544   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.642975   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.643006   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.643146   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.643348   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.643507   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.643671   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.643838   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.644011   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.644039   65980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:06:39.974134   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:06:39.974168   65980 machine.go:97] duration metric: took 963.184467ms to provisionDockerMachine
	I0429 20:06:39.974186   65980 start.go:293] postStartSetup for "embed-certs-161370" (driver="kvm2")
	I0429 20:06:39.974201   65980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:06:39.974229   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:39.974601   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:06:39.974636   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.977843   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.978295   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.978328   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.978528   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.978768   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.978939   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.979144   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.066379   65980 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:06:40.071720   65980 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:06:40.071742   65980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:06:40.071798   65980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:06:40.071875   65980 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:06:40.071965   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:06:40.082556   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:40.112774   65980 start.go:296] duration metric: took 138.571139ms for postStartSetup
	I0429 20:06:40.112827   65980 fix.go:56] duration metric: took 23.080734046s for fixHost
	I0429 20:06:40.112859   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.115931   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.116414   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.116448   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.116643   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.116859   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.117026   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.117169   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.117358   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:40.117560   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:40.117576   65980 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:06:40.223697   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421200.206855033
	
	I0429 20:06:40.223722   65980 fix.go:216] guest clock: 1714421200.206855033
	I0429 20:06:40.223732   65980 fix.go:229] Guest: 2024-04-29 20:06:40.206855033 +0000 UTC Remote: 2024-04-29 20:06:40.112832003 +0000 UTC m=+362.399028562 (delta=94.02303ms)
	I0429 20:06:40.223777   65980 fix.go:200] guest clock delta is within tolerance: 94.02303ms
	I0429 20:06:40.223782   65980 start.go:83] releasing machines lock for "embed-certs-161370", held for 23.191744513s
	I0429 20:06:40.223804   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.224106   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:40.226904   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.227299   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.227328   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.227462   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.227955   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.228117   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.228199   65980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:06:40.228238   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.228353   65980 ssh_runner.go:195] Run: cat /version.json
	I0429 20:06:40.228378   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.230943   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231151   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231370   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.231401   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231585   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.231595   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.231629   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231794   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.231806   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.231982   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.232000   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.232182   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.232197   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.232303   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.337533   65980 ssh_runner.go:195] Run: systemctl --version
	I0429 20:06:40.347252   65980 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:06:40.494668   65980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:06:40.502707   65980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:06:40.502788   65980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:06:40.522261   65980 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:06:40.522298   65980 start.go:494] detecting cgroup driver to use...
	I0429 20:06:40.522368   65980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:06:40.540576   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:06:40.557130   65980 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:06:40.557203   65980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:06:40.573803   65980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:06:40.589730   65980 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:06:40.731625   65980 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:06:40.902594   65980 docker.go:233] disabling docker service ...
	I0429 20:06:40.902665   65980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:06:40.921454   65980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:06:40.938734   65980 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:06:41.081822   65980 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:06:41.237778   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:06:41.254086   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:06:41.276277   65980 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:06:41.276362   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.288903   65980 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:06:41.288972   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.301347   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.313639   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.325885   65980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:06:41.338215   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.350839   65980 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.372124   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.385505   65980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:06:41.397626   65980 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:06:41.397704   65980 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:06:41.413915   65980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:06:41.427068   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:41.575690   65980 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:06:41.748047   65980 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:06:41.748132   65980 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:06:41.753313   65980 start.go:562] Will wait 60s for crictl version
	I0429 20:06:41.753379   65980 ssh_runner.go:195] Run: which crictl
	I0429 20:06:41.757672   65980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:06:41.794045   65980 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:06:41.794150   65980 ssh_runner.go:195] Run: crio --version
	I0429 20:06:41.831177   65980 ssh_runner.go:195] Run: crio --version
	I0429 20:06:41.865125   65980 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:06:38.049006   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:40.050003   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:42.050213   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:41.866698   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:41.869477   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:41.869815   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:41.869848   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:41.870107   65980 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0429 20:06:41.874917   65980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:41.889196   65980 kubeadm.go:877] updating cluster {Name:embed-certs-161370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:06:41.889353   65980 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:06:41.889423   65980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:41.936285   65980 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:06:41.936352   65980 ssh_runner.go:195] Run: which lz4
	I0429 20:06:41.941893   65980 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:06:41.947071   65980 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:06:41.947112   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 20:06:40.399024   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:40.899056   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.399275   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.899285   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.399200   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.899243   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.398590   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.899346   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.143962   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:41.645981   66875 node_ready.go:49] node "default-k8s-diff-port-866143" has status "Ready":"True"
	I0429 20:06:41.646007   66875 node_ready.go:38] duration metric: took 7.007388661s for node "default-k8s-diff-port-866143" to be "Ready" ...
	I0429 20:06:41.646018   66875 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:41.652664   66875 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.657667   66875 pod_ready.go:92] pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.657685   66875 pod_ready.go:81] duration metric: took 4.993051ms for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.657694   66875 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.662632   66875 pod_ready.go:92] pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.662650   66875 pod_ready.go:81] duration metric: took 4.950519ms for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.662658   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.667488   66875 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.667509   66875 pod_ready.go:81] duration metric: took 4.844299ms for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.667520   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.672480   66875 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.672501   66875 pod_ready.go:81] duration metric: took 4.974639ms for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.672512   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:42.042828   66875 pod_ready.go:92] pod "kube-proxy-zddtx" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:42.042856   66875 pod_ready.go:81] duration metric: took 370.336555ms for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:42.042868   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.051930   66875 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:44.548970   66875 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:44.548999   66875 pod_ready.go:81] duration metric: took 2.506120519s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.549011   66875 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.051077   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:46.052233   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:43.759688   65980 crio.go:462] duration metric: took 1.817838869s to copy over tarball
	I0429 20:06:43.759784   65980 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:46.405802   65980 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.64598022s)
	I0429 20:06:46.405851   65980 crio.go:469] duration metric: took 2.646122331s to extract the tarball
	I0429 20:06:46.405861   65980 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:46.444700   65980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:46.503047   65980 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 20:06:46.503086   65980 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:06:46.503098   65980 kubeadm.go:928] updating node { 192.168.50.184 8443 v1.30.0 crio true true} ...
	I0429 20:06:46.503234   65980 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-161370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:46.503334   65980 ssh_runner.go:195] Run: crio config
	I0429 20:06:46.563489   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:06:46.563511   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:46.563523   65980 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:46.563542   65980 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-161370 NodeName:embed-certs-161370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:06:46.563662   65980 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-161370"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:46.563719   65980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:06:46.576288   65980 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:46.576350   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:46.586807   65980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0429 20:06:46.605883   65980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:46.626741   65980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0429 20:06:46.647223   65980 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:46.652262   65980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:46.667095   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:46.804937   65980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:46.831022   65980 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370 for IP: 192.168.50.184
	I0429 20:06:46.831048   65980 certs.go:194] generating shared ca certs ...
	I0429 20:06:46.831067   65980 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:46.831251   65980 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:46.831295   65980 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:46.831306   65980 certs.go:256] generating profile certs ...
	I0429 20:06:46.831385   65980 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/client.key
	I0429 20:06:46.831440   65980 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.key.9384fac7
	I0429 20:06:46.831476   65980 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.key
	I0429 20:06:46.831582   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:46.831610   65980 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:46.831617   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:46.831635   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:46.831662   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:46.831691   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:46.831729   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:46.832571   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:46.896363   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:46.939336   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:46.976256   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:47.007777   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0429 20:06:47.045019   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:47.079584   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:47.114002   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:06:47.142163   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:47.170063   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:47.199098   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:47.228985   65980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:47.250928   65980 ssh_runner.go:195] Run: openssl version
	I0429 20:06:47.258215   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:47.271653   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.277100   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.277183   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.283876   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:47.297519   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:47.311104   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.316347   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.316408   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.322992   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:47.337744   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:47.351332   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.356912   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.356964   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.363339   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:47.378501   65980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:47.383995   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:47.391157   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:47.398039   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:47.405117   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:47.412125   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:47.419257   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 20:06:47.425917   65980 kubeadm.go:391] StartCluster: {Name:embed-certs-161370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:47.426009   65980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:47.426049   65980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:47.469133   65980 cri.go:89] found id: ""
	I0429 20:06:47.469216   65980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:47.481852   65980 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:47.481878   65980 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:47.481883   65980 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:47.481926   65980 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:47.495254   65980 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:47.496760   65980 kubeconfig.go:125] found "embed-certs-161370" server: "https://192.168.50.184:8443"
	I0429 20:06:47.499898   65980 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:47.511866   65980 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.184
	I0429 20:06:47.511903   65980 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:47.511917   65980 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:47.511972   65980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:47.563879   65980 cri.go:89] found id: ""
	I0429 20:06:47.563956   65980 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:47.583490   65980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:47.595867   65980 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:47.595893   65980 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:47.595947   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:06:47.608218   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:47.608283   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:47.620329   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:06:47.631394   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:47.631527   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:47.643107   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:06:47.654164   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:47.654233   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:47.665890   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:06:47.676817   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:47.676859   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:47.688608   65980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:47.700068   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:45.398908   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:45.898619   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.398795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.899058   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.399257   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.899269   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.398874   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.898653   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.399305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.898855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.556692   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:49.056546   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:48.550949   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:50.551905   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:47.821391   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.623284   65980 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.31791052s)
	I0429 20:06:49.623343   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.870630   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.950525   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:50.061240   65980 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:50.061331   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.562165   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.062299   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.139853   65980 api_server.go:72] duration metric: took 1.078602354s to wait for apiserver process to appear ...
	I0429 20:06:51.139883   65980 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:06:51.139905   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:51.140472   65980 api_server.go:269] stopped: https://192.168.50.184:8443/healthz: Get "https://192.168.50.184:8443/healthz": dial tcp 192.168.50.184:8443: connect: connection refused
	I0429 20:06:51.640813   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:50.398577   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.899284   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.399361   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.899134   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.399211   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.898733   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.399280   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.898915   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.399264   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.898840   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.057650   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:53.559429   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:53.049570   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:55.049866   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:57.050558   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:54.540707   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:54.540765   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:54.540797   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:54.618982   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:54.619016   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:54.640333   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:54.674491   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:54.674542   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:55.140955   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:55.157479   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:55.157517   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:55.639999   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:55.646278   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:55.646311   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:56.140938   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:56.147336   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:56.147371   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:56.640927   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:56.647027   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:56.647054   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:57.140894   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:57.145193   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:57.145236   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:57.640842   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:57.645453   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:57.645478   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:58.140524   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:58.146317   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0429 20:06:58.153972   65980 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:58.154011   65980 api_server.go:131] duration metric: took 7.014120443s to wait for apiserver health ...
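	The healthz probes above walk through the usual apiserver startup sequence: connection refused while the socket is not yet open, 403 for the anonymous probe while the RBAC bootstrap roles that normally allow unauthenticated /healthz access are still missing, 500 while individual post-start hooks are still failing, and finally 200 once the control plane is healthy. A minimal Go sketch of that style of probe follows; the URL is the one in the log, and the insecure TLS client, interval, and timeout are illustrative assumptions rather than minikube's actual client setup.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK,
// printing the body of non-200 responses, much like the check-and-log pattern
// recorded above. Certificate verification is skipped purely for illustration,
// since the apiserver serves a cluster-local certificate.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.184:8443/healthz", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}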
	I0429 20:06:58.154028   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:06:58.154036   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:58.155341   65980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:55.398622   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:55.898563   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.399306   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.898473   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.899278   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.399121   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.899291   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.399197   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.898901   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.056503   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:58.056988   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:59.053737   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:01.555480   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:58.156794   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:58.176870   65980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:58.215333   65980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:58.230619   65980 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:58.230655   65980 system_pods.go:61] "coredns-7db6d8ff4d-wjfff" [bd92e456-b538-49ae-984b-c6bcea6add30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:58.230667   65980 system_pods.go:61] "etcd-embed-certs-161370" [da2d022f-33c4-49b7-b997-a6783043f3e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:58.230675   65980 system_pods.go:61] "kube-apiserver-embed-certs-161370" [032913c9-bb91-46ba-ad4d-a4d5b63d806f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:58.230681   65980 system_pods.go:61] "kube-controller-manager-embed-certs-161370" [2f3ae1ff-0688-4c70-a888-5e1e640f64bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:58.230685   65980 system_pods.go:61] "kube-proxy-9kmg8" [01bbd2ca-24d2-4c7c-b4ea-79604ac3f486] Running
	I0429 20:06:58.230689   65980 system_pods.go:61] "kube-scheduler-embed-certs-161370" [c88ab7cc-1aef-48cb-814e-eff8e946885c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:58.230694   65980 system_pods.go:61] "metrics-server-569cc877fc-c4h7f" [bf1cae8d-cca1-4573-935f-e60118ca9575] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:58.230698   65980 system_pods.go:61] "storage-provisioner" [1686a084-f28b-4b26-b748-85a2a3613dde] Running
	I0429 20:06:58.230703   65980 system_pods.go:74] duration metric: took 15.348727ms to wait for pod list to return data ...
	I0429 20:06:58.230713   65980 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:58.233411   65980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:58.233436   65980 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:58.233447   65980 node_conditions.go:105] duration metric: took 2.729694ms to run NodePressure ...
	I0429 20:06:58.233466   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:58.532729   65980 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:58.538018   65980 kubeadm.go:733] kubelet initialised
	I0429 20:06:58.538038   65980 kubeadm.go:734] duration metric: took 5.283028ms waiting for restarted kubelet to initialise ...
	I0429 20:06:58.538046   65980 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:58.544267   65980 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:00.553501   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:00.398537   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.899359   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.399125   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.899428   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.399457   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.899355   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.399421   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.899376   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.399331   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.899263   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.555517   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:02.557429   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:05.056216   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:04.049941   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:06.051285   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:03.069330   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:03.554710   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.554732   65980 pod_ready.go:81] duration metric: took 5.010440873s for pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.554742   65980 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.562277   65980 pod_ready.go:92] pod "etcd-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.562298   65980 pod_ready.go:81] duration metric: took 7.550156ms for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.562306   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.567038   65980 pod_ready.go:92] pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.567060   65980 pod_ready.go:81] duration metric: took 4.748002ms for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.567069   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.573632   65980 pod_ready.go:92] pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.573664   65980 pod_ready.go:81] duration metric: took 1.006574407s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.573675   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9kmg8" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.578356   65980 pod_ready.go:92] pod "kube-proxy-9kmg8" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.578377   65980 pod_ready.go:81] duration metric: took 4.694437ms for pod "kube-proxy-9kmg8" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.578388   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.749703   65980 pod_ready.go:92] pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.749733   65980 pod_ready.go:81] duration metric: took 171.336391ms for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.749750   65980 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:06.757041   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
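	The pod_ready.go lines interleaved above, from the parallel StartStop runs, are each a wait loop on a kube-system pod's Ready condition; the metrics-server pods stay "Ready":"False" throughout, which is what the AddonExistsAfterStop failures in the summary reflect. Below is a small client-go sketch of that kind of check; the kubeconfig path is a placeholder, the pod name is the one from the log, and the polling loop is an illustrative assumption rather than minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod currently has condition Ready=True.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path; substitute the kubeconfig of the cluster under test.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		ready, err := isPodReady(ctx, cs, "kube-system", "metrics-server-569cc877fc-c4h7f")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}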
	I0429 20:07:05.398458   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:05.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.399205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.399308   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.898749   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:08.399182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:08.399271   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:08.448015   66615 cri.go:89] found id: ""
	I0429 20:07:08.448041   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.448049   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:08.448055   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:08.448103   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:08.491239   66615 cri.go:89] found id: ""
	I0429 20:07:08.491265   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.491274   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:08.491280   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:08.491330   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:08.541203   66615 cri.go:89] found id: ""
	I0429 20:07:08.541226   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.541234   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:08.541239   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:08.541300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:08.584370   66615 cri.go:89] found id: ""
	I0429 20:07:08.584393   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.584401   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:08.584407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:08.584469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:08.625126   66615 cri.go:89] found id: ""
	I0429 20:07:08.625158   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.625169   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:08.625182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:08.625246   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:08.666987   66615 cri.go:89] found id: ""
	I0429 20:07:08.667018   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.667032   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:08.667039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:08.667105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:08.712363   66615 cri.go:89] found id: ""
	I0429 20:07:08.712394   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.712405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:08.712413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:08.712471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:08.762122   66615 cri.go:89] found id: ""
	I0429 20:07:08.762151   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.762170   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:08.762180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:08.762196   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:08.808218   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:08.808246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:08.867278   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:08.867317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:08.884230   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:08.884266   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:09.018183   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:09.018208   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:09.018224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:07.555443   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:09.557653   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:08.551823   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.051232   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:09.257687   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.758829   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.587112   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:11.603711   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:11.603781   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:11.651087   66615 cri.go:89] found id: ""
	I0429 20:07:11.651115   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.651123   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:11.651128   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:11.651192   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:11.691888   66615 cri.go:89] found id: ""
	I0429 20:07:11.691914   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.691921   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:11.691928   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:11.691976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:11.733411   66615 cri.go:89] found id: ""
	I0429 20:07:11.733441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.733452   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:11.733460   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:11.733517   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:11.774620   66615 cri.go:89] found id: ""
	I0429 20:07:11.774648   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.774659   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:11.774666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:11.774729   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:11.821410   66615 cri.go:89] found id: ""
	I0429 20:07:11.821441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.821449   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:11.821455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:11.821502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:11.864699   66615 cri.go:89] found id: ""
	I0429 20:07:11.864730   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.864741   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:11.864749   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:11.864809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:11.904637   66615 cri.go:89] found id: ""
	I0429 20:07:11.904678   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.904687   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:11.904693   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:11.904742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:11.970914   66615 cri.go:89] found id: ""
	I0429 20:07:11.970945   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.970957   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:11.970968   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:11.970984   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:12.024185   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:12.024226   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:12.040319   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:12.040349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:12.137888   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:12.137915   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:12.137941   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:12.210256   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:12.210290   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:14.758756   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:14.775321   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:14.775386   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:14.812637   66615 cri.go:89] found id: ""
	I0429 20:07:14.812662   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.812672   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:14.812679   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:14.812735   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:14.851503   66615 cri.go:89] found id: ""
	I0429 20:07:14.851536   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.851547   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:14.851554   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:14.851613   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:14.885708   66615 cri.go:89] found id: ""
	I0429 20:07:14.885739   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.885749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:14.885756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:14.885817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:14.926133   66615 cri.go:89] found id: ""
	I0429 20:07:14.926162   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.926173   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:14.926181   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:14.926240   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:12.056093   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.056500   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:13.549924   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:15.550544   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.257394   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:16.756833   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.967553   66615 cri.go:89] found id: ""
	I0429 20:07:14.967582   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.967593   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:14.967601   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:14.967659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:15.006174   66615 cri.go:89] found id: ""
	I0429 20:07:15.006199   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.006207   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:15.006218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:15.006293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:15.046916   66615 cri.go:89] found id: ""
	I0429 20:07:15.046940   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.046947   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:15.046953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:15.047009   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:15.089229   66615 cri.go:89] found id: ""
	I0429 20:07:15.089256   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.089266   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:15.089278   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:15.089298   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:15.143518   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:15.143561   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:15.162742   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:15.162769   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:15.242850   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:15.242872   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:15.242884   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:15.315783   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:15.315825   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
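	The cri.go / logs.go cycle above, on the node still running Kubernetes v1.20.0 binaries (process 66615), runs "sudo crictl ps -a --quiet --name=<component>" for each control-plane component, finds no containers at all, and then falls back to gathering kubelet, dmesg, CRI-O, and "describe nodes" output. A compact Go sketch of that per-component probe is below; the crictl invocation and the component list are taken from the log, while the local exec (instead of minikube's ssh_runner) is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers asks crictl for all containers (running or exited) whose
// name matches the given component and returns their IDs. Running crictl
// locally via sudo stands in for minikube's ssh_runner, which executes the
// same command inside the node.
func listCRIContainers(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listCRIContainers(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}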
	I0429 20:07:17.863336   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:17.877802   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:17.877869   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:17.935714   66615 cri.go:89] found id: ""
	I0429 20:07:17.935738   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.935746   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:17.935754   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:17.935810   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:17.988496   66615 cri.go:89] found id: ""
	I0429 20:07:17.988529   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.988540   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:17.988547   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:17.988610   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:18.030695   66615 cri.go:89] found id: ""
	I0429 20:07:18.030726   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.030737   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:18.030745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:18.030822   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:18.077452   66615 cri.go:89] found id: ""
	I0429 20:07:18.077481   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.077491   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:18.077498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:18.077561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:18.120102   66615 cri.go:89] found id: ""
	I0429 20:07:18.120127   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.120136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:18.120141   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:18.120200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:18.163440   66615 cri.go:89] found id: ""
	I0429 20:07:18.163469   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.163480   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:18.163487   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:18.163549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:18.202650   66615 cri.go:89] found id: ""
	I0429 20:07:18.202680   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.202693   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:18.202699   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:18.202760   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:18.244378   66615 cri.go:89] found id: ""
	I0429 20:07:18.244408   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.244418   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:18.244429   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:18.244446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:18.289246   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:18.289279   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:18.343382   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:18.343425   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:18.359070   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:18.359103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:18.440316   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:18.440337   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:18.440351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:16.555711   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.555851   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.051297   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:20.551594   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.756941   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:20.756974   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:22.757155   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:21.019552   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:21.036407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:21.036523   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:21.083148   66615 cri.go:89] found id: ""
	I0429 20:07:21.083171   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.083179   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:21.083184   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:21.083231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:21.129382   66615 cri.go:89] found id: ""
	I0429 20:07:21.129415   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.129426   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:21.129434   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:21.129502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:21.172978   66615 cri.go:89] found id: ""
	I0429 20:07:21.173007   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.173015   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:21.173020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:21.173068   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:21.218124   66615 cri.go:89] found id: ""
	I0429 20:07:21.218159   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.218171   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:21.218178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:21.218243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:21.260603   66615 cri.go:89] found id: ""
	I0429 20:07:21.260640   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.260651   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:21.260658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:21.260723   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:21.302351   66615 cri.go:89] found id: ""
	I0429 20:07:21.302386   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.302398   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:21.302407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:21.302498   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:21.347003   66615 cri.go:89] found id: ""
	I0429 20:07:21.347028   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.347037   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:21.347043   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:21.347098   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:21.388202   66615 cri.go:89] found id: ""
	I0429 20:07:21.388236   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.388245   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:21.388257   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:21.388272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:21.442706   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:21.442744   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:21.457453   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:21.457489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:21.539669   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:21.539695   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:21.539707   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:21.625210   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:21.625247   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.173256   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:24.189920   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:24.189990   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:24.236730   66615 cri.go:89] found id: ""
	I0429 20:07:24.236761   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.236772   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:24.236779   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:24.236843   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:24.279031   66615 cri.go:89] found id: ""
	I0429 20:07:24.279055   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.279062   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:24.279067   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:24.279112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:24.321622   66615 cri.go:89] found id: ""
	I0429 20:07:24.321647   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.321657   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:24.321665   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:24.321726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:24.360884   66615 cri.go:89] found id: ""
	I0429 20:07:24.360911   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.360919   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:24.360924   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:24.360983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:24.414439   66615 cri.go:89] found id: ""
	I0429 20:07:24.414463   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.414472   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:24.414477   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:24.414559   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:24.456994   66615 cri.go:89] found id: ""
	I0429 20:07:24.457023   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.457033   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:24.457041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:24.457107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:24.497991   66615 cri.go:89] found id: ""
	I0429 20:07:24.498026   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.498036   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:24.498044   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:24.498137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:24.539375   66615 cri.go:89] found id: ""
	I0429 20:07:24.539415   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.539426   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:24.539438   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:24.539453   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:24.661778   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:24.661804   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:24.661820   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:24.748180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:24.748215   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.795963   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:24.795999   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:24.851485   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:24.851524   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:20.556543   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:22.556775   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:24.559759   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:23.052715   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:25.550857   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.551209   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:25.256195   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.258199   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.367869   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:27.385633   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:27.385716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:27.423181   66615 cri.go:89] found id: ""
	I0429 20:07:27.423210   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.423222   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:27.423233   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:27.423293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:27.467385   66615 cri.go:89] found id: ""
	I0429 20:07:27.467419   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.467432   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:27.467439   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:27.467503   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:27.506171   66615 cri.go:89] found id: ""
	I0429 20:07:27.506204   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.506216   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:27.506223   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:27.506272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:27.545043   66615 cri.go:89] found id: ""
	I0429 20:07:27.545066   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.545074   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:27.545080   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:27.545136   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:27.592279   66615 cri.go:89] found id: ""
	I0429 20:07:27.592306   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.592314   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:27.592320   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:27.592379   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:27.628569   66615 cri.go:89] found id: ""
	I0429 20:07:27.628595   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.628604   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:27.628612   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:27.628659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:27.667937   66615 cri.go:89] found id: ""
	I0429 20:07:27.667967   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.667978   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:27.667985   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:27.668047   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:27.708813   66615 cri.go:89] found id: ""
	I0429 20:07:27.708844   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.708853   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:27.708861   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:27.708876   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:27.789589   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:27.789625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:27.837147   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:27.837180   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:27.891928   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:27.891956   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:27.906162   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:27.906188   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:27.983738   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:27.057372   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:29.555872   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:30.049373   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:32.052279   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:29.758675   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:32.257486   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:30.484404   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:30.503968   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:30.504041   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:30.553070   66615 cri.go:89] found id: ""
	I0429 20:07:30.553099   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.553111   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:30.553118   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:30.553180   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:30.609226   66615 cri.go:89] found id: ""
	I0429 20:07:30.609253   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.609262   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:30.609267   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:30.609324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:30.658359   66615 cri.go:89] found id: ""
	I0429 20:07:30.658384   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.658395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:30.658401   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:30.658459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:30.710024   66615 cri.go:89] found id: ""
	I0429 20:07:30.710048   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.710058   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:30.710114   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:30.710173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:30.752361   66615 cri.go:89] found id: ""
	I0429 20:07:30.752388   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.752398   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:30.752405   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:30.752469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:30.793311   66615 cri.go:89] found id: ""
	I0429 20:07:30.793333   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.793341   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:30.793347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:30.793394   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:30.832371   66615 cri.go:89] found id: ""
	I0429 20:07:30.832400   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.832411   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:30.832417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:30.832469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:30.871183   66615 cri.go:89] found id: ""
	I0429 20:07:30.871215   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.871226   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:30.871237   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:30.871253   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:30.929909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:30.929947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:30.944454   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:30.944482   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:31.022060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:31.022100   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:31.022116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:31.104142   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:31.104185   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:33.651167   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:33.667888   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:33.667948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:33.708455   66615 cri.go:89] found id: ""
	I0429 20:07:33.708484   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.708495   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:33.708502   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:33.708561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:33.747578   66615 cri.go:89] found id: ""
	I0429 20:07:33.747602   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.747611   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:33.747616   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:33.747661   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:33.796005   66615 cri.go:89] found id: ""
	I0429 20:07:33.796036   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.796056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:33.796064   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:33.796128   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:33.836238   66615 cri.go:89] found id: ""
	I0429 20:07:33.836263   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.836271   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:33.836276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:33.836324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:33.877010   66615 cri.go:89] found id: ""
	I0429 20:07:33.877043   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.877056   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:33.877065   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:33.877137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:33.919690   66615 cri.go:89] found id: ""
	I0429 20:07:33.919714   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.919722   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:33.919727   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:33.919797   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:33.959857   66615 cri.go:89] found id: ""
	I0429 20:07:33.959889   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.959900   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:33.959907   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:33.959989   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:33.996349   66615 cri.go:89] found id: ""
	I0429 20:07:33.996376   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.996386   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:33.996396   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:33.996433   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:34.010773   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:34.010808   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:34.091581   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:34.091599   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:34.091611   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:34.173266   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:34.173299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:34.221447   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:34.221479   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:32.055352   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.056364   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.550100   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.550663   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.756264   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.756583   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.776486   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:36.791630   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:36.791764   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:36.837475   66615 cri.go:89] found id: ""
	I0429 20:07:36.837503   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.837513   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:36.837521   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:36.837607   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:36.879902   66615 cri.go:89] found id: ""
	I0429 20:07:36.879936   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.879947   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:36.879954   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:36.880021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:36.918566   66615 cri.go:89] found id: ""
	I0429 20:07:36.918594   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.918608   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:36.918613   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:36.918659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:36.958876   66615 cri.go:89] found id: ""
	I0429 20:07:36.958937   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.958948   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:36.958959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:36.959008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:36.998790   66615 cri.go:89] found id: ""
	I0429 20:07:36.998820   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.998845   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:36.998864   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:36.998932   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:37.036933   66615 cri.go:89] found id: ""
	I0429 20:07:37.036962   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.036972   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:37.036979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:37.037024   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:37.076560   66615 cri.go:89] found id: ""
	I0429 20:07:37.076597   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.076609   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:37.076616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:37.076688   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:37.118324   66615 cri.go:89] found id: ""
	I0429 20:07:37.118351   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.118360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:37.118368   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:37.118380   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:37.194671   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:37.194714   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:37.236269   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:37.236300   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:37.297006   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:37.297061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:37.312696   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:37.312723   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:37.387132   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:39.888111   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:39.903157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:39.903236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:39.945913   66615 cri.go:89] found id: ""
	I0429 20:07:39.945945   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.945956   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:39.945980   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:39.946076   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:36.056553   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:38.057230   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:39.050274   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:41.053502   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:38.756717   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:40.762297   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:39.986494   66615 cri.go:89] found id: ""
	I0429 20:07:39.986521   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.986530   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:39.986538   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:39.986598   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:40.031481   66615 cri.go:89] found id: ""
	I0429 20:07:40.031520   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.031531   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:40.031539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:40.031604   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:40.076792   66615 cri.go:89] found id: ""
	I0429 20:07:40.076816   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.076824   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:40.076830   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:40.076877   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:40.121020   66615 cri.go:89] found id: ""
	I0429 20:07:40.121050   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.121061   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:40.121068   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:40.121134   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:40.173189   66615 cri.go:89] found id: ""
	I0429 20:07:40.173221   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.173233   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:40.173241   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:40.173303   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:40.220190   66615 cri.go:89] found id: ""
	I0429 20:07:40.220212   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.220223   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:40.220229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:40.220293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:40.262552   66615 cri.go:89] found id: ""
	I0429 20:07:40.262579   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.262588   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:40.262600   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:40.262616   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:40.322249   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:40.322289   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:40.338703   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:40.338734   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:40.431311   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:40.431333   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:40.431345   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:40.518410   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:40.518446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:43.062556   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:43.077757   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:43.077844   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:43.129247   66615 cri.go:89] found id: ""
	I0429 20:07:43.129277   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.129289   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:43.129296   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:43.129364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:43.173474   66615 cri.go:89] found id: ""
	I0429 20:07:43.173501   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.173509   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:43.173514   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:43.173566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:43.218788   66615 cri.go:89] found id: ""
	I0429 20:07:43.218812   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.218820   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:43.218825   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:43.218873   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:43.259269   66615 cri.go:89] found id: ""
	I0429 20:07:43.259289   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.259297   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:43.259302   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:43.259362   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:43.301152   66615 cri.go:89] found id: ""
	I0429 20:07:43.301180   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.301189   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:43.301195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:43.301244   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:43.338183   66615 cri.go:89] found id: ""
	I0429 20:07:43.338211   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.338222   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:43.338229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:43.338276   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:43.376919   66615 cri.go:89] found id: ""
	I0429 20:07:43.376946   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.376958   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:43.376966   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:43.377032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:43.417421   66615 cri.go:89] found id: ""
	I0429 20:07:43.417450   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.417457   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:43.417465   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:43.417478   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:43.470009   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:43.470040   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:43.486059   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:43.486109   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:43.561688   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:43.561709   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:43.561725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:43.649713   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:43.649750   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:40.555780   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.056758   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.552176   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:46.049393   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.256870   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:45.258520   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:47.757738   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:46.194996   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:46.210261   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:46.210342   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:46.249208   66615 cri.go:89] found id: ""
	I0429 20:07:46.249240   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.249253   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:46.249260   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:46.249336   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:46.287285   66615 cri.go:89] found id: ""
	I0429 20:07:46.287315   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.287328   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:46.287335   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:46.287397   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:46.327944   66615 cri.go:89] found id: ""
	I0429 20:07:46.327976   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.327988   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:46.327996   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:46.328061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:46.373875   66615 cri.go:89] found id: ""
	I0429 20:07:46.373899   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.373908   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:46.373914   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:46.373967   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:46.413748   66615 cri.go:89] found id: ""
	I0429 20:07:46.413774   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.413783   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:46.413789   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:46.413853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:46.459380   66615 cri.go:89] found id: ""
	I0429 20:07:46.459412   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.459424   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:46.459432   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:46.459496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:46.499833   66615 cri.go:89] found id: ""
	I0429 20:07:46.499861   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.499870   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:46.499876   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:46.499939   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:46.541025   66615 cri.go:89] found id: ""
	I0429 20:07:46.541055   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.541068   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:46.541080   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:46.541096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:46.601187   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:46.601224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:46.617399   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:46.617426   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:46.697076   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:46.697113   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:46.697129   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:46.783265   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:46.783303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.335795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:49.350030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:49.350116   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:49.390278   66615 cri.go:89] found id: ""
	I0429 20:07:49.390315   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.390326   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:49.390333   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:49.390388   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:49.431145   66615 cri.go:89] found id: ""
	I0429 20:07:49.431175   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.431186   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:49.431193   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:49.431252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:49.473965   66615 cri.go:89] found id: ""
	I0429 20:07:49.473997   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.474014   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:49.474022   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:49.474105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:49.515372   66615 cri.go:89] found id: ""
	I0429 20:07:49.515407   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.515419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:49.515427   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:49.515487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:49.552541   66615 cri.go:89] found id: ""
	I0429 20:07:49.552567   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.552576   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:49.552582   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:49.552650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:49.599628   66615 cri.go:89] found id: ""
	I0429 20:07:49.599660   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.599672   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:49.599680   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:49.599745   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:49.642705   66615 cri.go:89] found id: ""
	I0429 20:07:49.642741   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.642752   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:49.642759   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:49.642827   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:49.679864   66615 cri.go:89] found id: ""
	I0429 20:07:49.679888   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.679896   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:49.679905   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:49.679919   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:49.765967   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:49.765986   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:49.766010   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:49.852739   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:49.852779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.905586   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:49.905613   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:45.559781   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:48.059952   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:48.049788   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:50.548836   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.551059   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:50.256898   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.757213   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:49.959443   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:49.959474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:52.476677   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:52.491378   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:52.491458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:52.535801   66615 cri.go:89] found id: ""
	I0429 20:07:52.535827   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.535835   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:52.535841   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:52.535901   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:52.582895   66615 cri.go:89] found id: ""
	I0429 20:07:52.582932   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.582944   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:52.582952   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:52.583022   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:52.627070   66615 cri.go:89] found id: ""
	I0429 20:07:52.627096   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.627113   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:52.627120   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:52.627181   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:52.673312   66615 cri.go:89] found id: ""
	I0429 20:07:52.673339   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.673348   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:52.673353   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:52.673399   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:52.713099   66615 cri.go:89] found id: ""
	I0429 20:07:52.713124   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.713131   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:52.713139   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:52.713205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:52.761982   66615 cri.go:89] found id: ""
	I0429 20:07:52.762007   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.762017   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:52.762024   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:52.762108   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:52.801019   66615 cri.go:89] found id: ""
	I0429 20:07:52.801048   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.801059   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:52.801067   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:52.801141   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:52.842544   66615 cri.go:89] found id: ""
	I0429 20:07:52.842578   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.842602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:52.842613   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:52.842630   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:52.896409   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:52.896442   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:52.912625   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:52.912650   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:52.992231   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:52.992260   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:52.992276   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:53.077473   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:53.077507   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:50.555818   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.556860   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:54.557161   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:54.554094   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:57.049699   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:55.257406   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:57.257840   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:55.625557   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:55.640211   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:55.640284   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:55.683215   66615 cri.go:89] found id: ""
	I0429 20:07:55.683250   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.683259   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:55.683275   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:55.683341   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:55.730820   66615 cri.go:89] found id: ""
	I0429 20:07:55.730851   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.730862   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:55.730869   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:55.730928   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:55.771784   66615 cri.go:89] found id: ""
	I0429 20:07:55.771808   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.771816   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:55.771821   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:55.771866   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:55.814988   66615 cri.go:89] found id: ""
	I0429 20:07:55.815021   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.815034   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:55.815042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:55.815114   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:55.859293   66615 cri.go:89] found id: ""
	I0429 20:07:55.859327   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.859340   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:55.859349   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:55.859416   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:55.901802   66615 cri.go:89] found id: ""
	I0429 20:07:55.901833   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.901844   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:55.901852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:55.901921   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:55.943863   66615 cri.go:89] found id: ""
	I0429 20:07:55.943895   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.943905   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:55.943913   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:55.943977   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:55.986256   66615 cri.go:89] found id: ""
	I0429 20:07:55.986284   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.986296   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:55.986314   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:55.986332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:56.036710   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:56.036742   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:56.099909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:56.099945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:56.117630   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:56.117660   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:56.197396   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:56.197421   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:56.197436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:58.779065   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:58.794086   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:58.794168   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:58.844035   66615 cri.go:89] found id: ""
	I0429 20:07:58.844062   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.844070   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:58.844076   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:58.844133   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:58.887859   66615 cri.go:89] found id: ""
	I0429 20:07:58.887889   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.887900   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:58.887906   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:58.887991   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:58.929039   66615 cri.go:89] found id: ""
	I0429 20:07:58.929072   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.929083   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:58.929092   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:58.929152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:58.965930   66615 cri.go:89] found id: ""
	I0429 20:07:58.965975   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.965983   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:58.965989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:58.966061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:59.005583   66615 cri.go:89] found id: ""
	I0429 20:07:59.005616   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.005628   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:59.005638   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:59.005697   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:59.047964   66615 cri.go:89] found id: ""
	I0429 20:07:59.047994   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.048007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:59.048014   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:59.048077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:59.091851   66615 cri.go:89] found id: ""
	I0429 20:07:59.091891   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.091904   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:59.091909   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:59.091978   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:59.134843   66615 cri.go:89] found id: ""
	I0429 20:07:59.134874   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.134881   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:59.134890   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:59.134907   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:59.219048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:59.219084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:59.267404   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:59.267436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:59.322264   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:59.322303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:59.339196   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:59.339235   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:59.441904   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:56.558660   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.057214   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.054473   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.550825   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.756683   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.759031   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.942998   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:01.957442   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:01.957502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:02.002240   66615 cri.go:89] found id: ""
	I0429 20:08:02.002271   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.002283   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:02.002291   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:02.002353   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:02.051506   66615 cri.go:89] found id: ""
	I0429 20:08:02.051535   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.051546   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:02.051552   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:02.051611   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:02.093194   66615 cri.go:89] found id: ""
	I0429 20:08:02.093234   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.093247   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:02.093254   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:02.093317   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:02.134988   66615 cri.go:89] found id: ""
	I0429 20:08:02.135016   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.135027   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:02.135034   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:02.135099   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:02.182954   66615 cri.go:89] found id: ""
	I0429 20:08:02.182982   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.182993   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:02.183000   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:02.183063   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:02.227778   66615 cri.go:89] found id: ""
	I0429 20:08:02.227807   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.227817   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:02.227826   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:02.227888   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:02.265593   66615 cri.go:89] found id: ""
	I0429 20:08:02.265624   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.265634   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:02.265641   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:02.265701   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:02.306520   66615 cri.go:89] found id: ""
	I0429 20:08:02.306550   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.306558   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:02.306566   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:02.306578   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:02.323806   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:02.323844   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:02.407110   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:02.407140   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:02.407153   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:02.493755   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:02.493791   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:02.538610   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:02.538640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:01.556084   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:03.556487   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:03.551788   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:05.553047   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:04.257831   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:06.756438   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:05.096630   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:05.111112   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:05.111173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:05.151237   66615 cri.go:89] found id: ""
	I0429 20:08:05.151268   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.151279   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:05.151286   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:05.151370   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:05.205344   66615 cri.go:89] found id: ""
	I0429 20:08:05.205379   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.205389   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:05.205396   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:05.205478   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:05.244394   66615 cri.go:89] found id: ""
	I0429 20:08:05.244426   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.244438   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:05.244445   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:05.244504   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:05.285320   66615 cri.go:89] found id: ""
	I0429 20:08:05.285343   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.285350   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:05.285356   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:05.285404   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:05.327618   66615 cri.go:89] found id: ""
	I0429 20:08:05.327645   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.327657   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:05.327664   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:05.327742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:05.369152   66615 cri.go:89] found id: ""
	I0429 20:08:05.369178   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.369194   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:05.369208   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:05.369277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:05.407206   66615 cri.go:89] found id: ""
	I0429 20:08:05.407234   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.407243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:05.407248   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:05.407299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:05.447404   66615 cri.go:89] found id: ""
	I0429 20:08:05.447438   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.447449   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:05.447459   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:05.447475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:05.529660   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:05.529700   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:05.582510   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:05.582565   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:05.639300   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:05.639351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:05.656825   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:05.656860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:05.730863   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.231635   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:08.247722   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:08.247811   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:08.298354   66615 cri.go:89] found id: ""
	I0429 20:08:08.298382   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.298395   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:08.298401   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:08.298459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:08.339497   66615 cri.go:89] found id: ""
	I0429 20:08:08.339536   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.339549   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:08.339556   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:08.339609   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:08.379665   66615 cri.go:89] found id: ""
	I0429 20:08:08.379695   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.379705   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:08.379712   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:08.379786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:08.419698   66615 cri.go:89] found id: ""
	I0429 20:08:08.419722   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.419732   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:08.419739   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:08.419798   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:08.463901   66615 cri.go:89] found id: ""
	I0429 20:08:08.463935   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.463946   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:08.463953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:08.464028   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:08.504568   66615 cri.go:89] found id: ""
	I0429 20:08:08.504603   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.504617   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:08.504626   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:08.504695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:08.545634   66615 cri.go:89] found id: ""
	I0429 20:08:08.545661   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.545671   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:08.545678   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:08.545741   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:08.586936   66615 cri.go:89] found id: ""
	I0429 20:08:08.586965   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.586976   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:08.586987   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:08.587003   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:08.641755   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:08.641794   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:08.659798   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:08.659845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:08.744265   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.744288   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:08.744303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:08.823813   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:08.823860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:05.557172   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:07.558538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:10.057841   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:08.049902   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:10.050576   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:12.051331   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:08.757300   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:11.257697   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:11.375600   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:11.396286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:11.396351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:11.442737   66615 cri.go:89] found id: ""
	I0429 20:08:11.442781   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.442789   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:11.442797   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:11.442865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:11.484131   66615 cri.go:89] found id: ""
	I0429 20:08:11.484158   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.484167   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:11.484172   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:11.484231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:11.526647   66615 cri.go:89] found id: ""
	I0429 20:08:11.526684   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.526695   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:11.526705   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:11.526777   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:11.572001   66615 cri.go:89] found id: ""
	I0429 20:08:11.572028   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.572036   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:11.572042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:11.572100   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:11.618980   66615 cri.go:89] found id: ""
	I0429 20:08:11.619003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.619011   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:11.619016   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:11.619077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:11.667079   66615 cri.go:89] found id: ""
	I0429 20:08:11.667107   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.667115   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:11.667123   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:11.667198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:11.707967   66615 cri.go:89] found id: ""
	I0429 20:08:11.708003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.708013   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:11.708020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:11.708073   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:11.753024   66615 cri.go:89] found id: ""
	I0429 20:08:11.753053   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.753062   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:11.753070   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:11.753081   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:11.820171   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:11.820210   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:11.852234   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:11.852263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:11.971060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:11.971085   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:11.971097   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:12.049797   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:12.049845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:14.601181   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:14.621413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:14.621496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:14.677453   66615 cri.go:89] found id: ""
	I0429 20:08:14.677486   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.677498   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:14.677504   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:14.677562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:14.720517   66615 cri.go:89] found id: ""
	I0429 20:08:14.720548   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.720560   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:14.720571   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:14.720636   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:14.770186   66615 cri.go:89] found id: ""
	I0429 20:08:14.770211   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.770219   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:14.770225   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:14.770301   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:14.815286   66615 cri.go:89] found id: ""
	I0429 20:08:14.815310   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.815320   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:14.815327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:14.815389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:14.862625   66615 cri.go:89] found id: ""
	I0429 20:08:14.862651   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.862662   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:14.862669   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:14.862726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:14.910517   66615 cri.go:89] found id: ""
	I0429 20:08:14.910554   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.910565   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:14.910572   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:14.910634   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:14.951085   66615 cri.go:89] found id: ""
	I0429 20:08:14.951110   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.951119   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:14.951124   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:14.951173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:12.558191   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:15.056987   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:14.051423   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:16.051632   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:13.757001   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:16.257425   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:14.991414   66615 cri.go:89] found id: ""
	I0429 20:08:14.991443   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.991455   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:14.991464   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:14.991476   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:15.047551   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:15.047583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:15.063667   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:15.063692   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:15.141744   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:15.141820   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:15.141841   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:15.225676   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:15.225722   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:17.774459   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:17.793137   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:17.793210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:17.856725   66615 cri.go:89] found id: ""
	I0429 20:08:17.856756   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.856767   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:17.856774   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:17.856835   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:17.916510   66615 cri.go:89] found id: ""
	I0429 20:08:17.916542   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.916554   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:17.916561   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:17.916646   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:17.970835   66615 cri.go:89] found id: ""
	I0429 20:08:17.970867   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.970877   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:17.970884   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:17.970948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:18.013324   66615 cri.go:89] found id: ""
	I0429 20:08:18.013353   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.013366   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:18.013384   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:18.013458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:18.062930   66615 cri.go:89] found id: ""
	I0429 20:08:18.062957   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.062968   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:18.062974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:18.063040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:18.111792   66615 cri.go:89] found id: ""
	I0429 20:08:18.111820   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.111829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:18.111834   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:18.111911   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:18.160096   66615 cri.go:89] found id: ""
	I0429 20:08:18.160121   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.160129   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:18.160135   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:18.160198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:18.204012   66615 cri.go:89] found id: ""
	I0429 20:08:18.204044   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.204052   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:18.204062   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:18.204074   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:18.284288   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:18.284337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:18.340746   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:18.340779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:18.397612   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:18.397652   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:18.413425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:18.413455   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:18.493598   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:17.058215   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:19.556308   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:18.551175   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:20.551292   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:22.551637   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:18.757370   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:21.259192   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:20.994339   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:21.010199   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:21.010289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:21.052190   66615 cri.go:89] found id: ""
	I0429 20:08:21.052219   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.052230   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:21.052237   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:21.052300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:21.090838   66615 cri.go:89] found id: ""
	I0429 20:08:21.090870   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.090882   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:21.090889   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:21.090953   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:21.137997   66615 cri.go:89] found id: ""
	I0429 20:08:21.138044   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.138056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:21.138082   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:21.138171   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:21.176278   66615 cri.go:89] found id: ""
	I0429 20:08:21.176311   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.176323   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:21.176331   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:21.176390   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:21.213925   66615 cri.go:89] found id: ""
	I0429 20:08:21.213955   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.213966   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:21.213973   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:21.214039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:21.253815   66615 cri.go:89] found id: ""
	I0429 20:08:21.253842   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.253850   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:21.253857   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:21.253905   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:21.296521   66615 cri.go:89] found id: ""
	I0429 20:08:21.296553   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.296565   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:21.296573   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:21.296633   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:21.337114   66615 cri.go:89] found id: ""
	I0429 20:08:21.337143   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.337150   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:21.337158   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:21.337177   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:21.384860   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:21.384901   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:21.443837   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:21.443899   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:21.460084   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:21.460116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:21.541230   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:21.541262   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:21.541278   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.132057   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:24.148381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:24.148458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:24.192469   66615 cri.go:89] found id: ""
	I0429 20:08:24.192499   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.192510   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:24.192516   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:24.192568   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:24.232150   66615 cri.go:89] found id: ""
	I0429 20:08:24.232177   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.232188   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:24.232195   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:24.232260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:24.272679   66615 cri.go:89] found id: ""
	I0429 20:08:24.272705   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.272714   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:24.272719   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:24.272772   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:24.317114   66615 cri.go:89] found id: ""
	I0429 20:08:24.317137   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.317145   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:24.317151   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:24.317200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:24.362251   66615 cri.go:89] found id: ""
	I0429 20:08:24.362279   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.362287   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:24.362294   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:24.362346   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:24.405696   66615 cri.go:89] found id: ""
	I0429 20:08:24.405721   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.405729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:24.405734   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:24.405828   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:24.446837   66615 cri.go:89] found id: ""
	I0429 20:08:24.446864   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.446871   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:24.446878   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:24.446929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:24.493416   66615 cri.go:89] found id: ""
	I0429 20:08:24.493445   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.493454   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:24.493462   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:24.493475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:24.555657   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:24.555693   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:24.572297   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:24.572328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:24.658463   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:24.658487   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:24.658499   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.752064   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:24.752103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:21.557948   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:24.056339   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:25.050530   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:27.554744   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:23.758156   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:26.261403   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:27.303812   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:27.319304   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:27.319373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:27.360473   66615 cri.go:89] found id: ""
	I0429 20:08:27.360509   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.360521   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:27.360529   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:27.360595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:27.404619   66615 cri.go:89] found id: ""
	I0429 20:08:27.404651   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.404668   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:27.404675   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:27.404742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:27.447464   66615 cri.go:89] found id: ""
	I0429 20:08:27.447490   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.447498   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:27.447503   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:27.447556   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:27.489197   66615 cri.go:89] found id: ""
	I0429 20:08:27.489235   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.489246   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:27.489253   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:27.489323   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:27.534354   66615 cri.go:89] found id: ""
	I0429 20:08:27.534387   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.534397   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:27.534404   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:27.534470   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:27.580721   66615 cri.go:89] found id: ""
	I0429 20:08:27.580751   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.580762   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:27.580769   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:27.580841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:27.620000   66615 cri.go:89] found id: ""
	I0429 20:08:27.620033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.620041   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:27.620046   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:27.620096   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:27.659000   66615 cri.go:89] found id: ""
	I0429 20:08:27.659033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.659041   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:27.659050   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:27.659062   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:27.739202   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:27.739241   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:27.784761   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:27.784807   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:27.842707   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:27.842748   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:27.859471   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:27.859498   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:27.942686   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:26.058098   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:28.059648   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.056692   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:32.550893   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:28.757412   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.759070   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.443410   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:30.460332   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:30.460417   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:30.497715   66615 cri.go:89] found id: ""
	I0429 20:08:30.497752   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.497764   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:30.497772   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:30.497841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:30.539376   66615 cri.go:89] found id: ""
	I0429 20:08:30.539409   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.539419   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:30.539426   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:30.539492   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:30.587567   66615 cri.go:89] found id: ""
	I0429 20:08:30.587596   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.587606   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:30.587616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:30.587679   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:30.626198   66615 cri.go:89] found id: ""
	I0429 20:08:30.626228   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.626238   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:30.626246   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:30.626313   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:30.665798   66615 cri.go:89] found id: ""
	I0429 20:08:30.665829   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.665837   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:30.665843   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:30.665909   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:30.708627   66615 cri.go:89] found id: ""
	I0429 20:08:30.708659   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.708671   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:30.708679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:30.708762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:30.754190   66615 cri.go:89] found id: ""
	I0429 20:08:30.754220   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.754230   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:30.754236   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:30.754295   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:30.797383   66615 cri.go:89] found id: ""
	I0429 20:08:30.797410   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.797421   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:30.797432   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:30.797447   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:30.843485   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:30.843512   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:30.900081   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:30.900118   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:30.916095   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:30.916125   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:30.995509   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:30.995529   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:30.995541   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:33.584596   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:33.600969   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:33.601058   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:33.643935   66615 cri.go:89] found id: ""
	I0429 20:08:33.643967   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.643979   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:33.643986   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:33.644049   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:33.681047   66615 cri.go:89] found id: ""
	I0429 20:08:33.681077   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.681085   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:33.681091   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:33.681160   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:33.726450   66615 cri.go:89] found id: ""
	I0429 20:08:33.726479   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.726490   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:33.726501   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:33.726561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:33.765237   66615 cri.go:89] found id: ""
	I0429 20:08:33.765264   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.765275   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:33.765281   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:33.765339   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:33.808333   66615 cri.go:89] found id: ""
	I0429 20:08:33.808366   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.808376   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:33.808383   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:33.808446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:33.854991   66615 cri.go:89] found id: ""
	I0429 20:08:33.855023   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.855034   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:33.855041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:33.855126   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:33.895405   66615 cri.go:89] found id: ""
	I0429 20:08:33.895434   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.895446   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:33.895455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:33.895521   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:33.937265   66615 cri.go:89] found id: ""
	I0429 20:08:33.937289   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.937297   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:33.937306   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:33.937324   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:33.991565   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:33.991594   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:34.006316   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:34.006343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:34.088734   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:34.088762   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:34.088776   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:34.180451   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:34.180489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:30.557020   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:33.058354   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:35.049638   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:37.051464   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:33.256955   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:35.257122   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:37.257629   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:36.727080   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:36.743038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:36.743124   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:36.785441   66615 cri.go:89] found id: ""
	I0429 20:08:36.785465   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.785475   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:36.785482   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:36.785542   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:36.828787   66615 cri.go:89] found id: ""
	I0429 20:08:36.828819   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.828829   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:36.828836   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:36.828896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:36.867712   66615 cri.go:89] found id: ""
	I0429 20:08:36.867738   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.867749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:36.867756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:36.867825   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:36.911435   66615 cri.go:89] found id: ""
	I0429 20:08:36.911462   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.911472   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:36.911478   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:36.911560   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:36.953803   66615 cri.go:89] found id: ""
	I0429 20:08:36.953828   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.953836   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:36.953842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:36.953903   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:36.990305   66615 cri.go:89] found id: ""
	I0429 20:08:36.990329   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.990339   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:36.990347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:36.990434   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:37.029177   66615 cri.go:89] found id: ""
	I0429 20:08:37.029206   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.029225   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:37.029232   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:37.029294   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:37.067583   66615 cri.go:89] found id: ""
	I0429 20:08:37.067605   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.067612   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:37.067619   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:37.067631   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:37.144739   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:37.144776   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:37.144788   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:37.227724   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:37.227762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:37.270383   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:37.270417   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:37.326858   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:37.326890   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:39.843323   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:39.859899   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:39.859961   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:39.903125   66615 cri.go:89] found id: ""
	I0429 20:08:39.903155   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.903164   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:39.903169   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:39.903243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:39.944271   66615 cri.go:89] found id: ""
	I0429 20:08:39.944300   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.944309   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:39.944314   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:39.944363   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:35.557115   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:38.056175   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.550339   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:42.048622   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.756355   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:42.255528   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.989934   66615 cri.go:89] found id: ""
	I0429 20:08:39.989964   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.989972   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:39.989978   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:39.990032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:40.025936   66615 cri.go:89] found id: ""
	I0429 20:08:40.025965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.025976   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:40.025983   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:40.026044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:40.065943   66615 cri.go:89] found id: ""
	I0429 20:08:40.065965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.065976   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:40.065984   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:40.066038   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:40.109986   66615 cri.go:89] found id: ""
	I0429 20:08:40.110018   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.110030   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:40.110038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:40.110115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:40.155610   66615 cri.go:89] found id: ""
	I0429 20:08:40.155716   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.155734   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:40.155745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:40.155803   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:40.196213   66615 cri.go:89] found id: ""
	I0429 20:08:40.196239   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.196246   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:40.196256   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:40.196272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:40.280330   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:40.280372   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:40.326774   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:40.326810   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:40.379438   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:40.379475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:40.395332   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:40.395362   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:40.504413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:43.005046   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:43.020464   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:43.020544   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:43.066403   66615 cri.go:89] found id: ""
	I0429 20:08:43.066432   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.066444   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:43.066452   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:43.066548   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:43.109732   66615 cri.go:89] found id: ""
	I0429 20:08:43.109760   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.109771   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:43.109778   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:43.109850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:43.158457   66615 cri.go:89] found id: ""
	I0429 20:08:43.158483   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.158492   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:43.158498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:43.158561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:43.207170   66615 cri.go:89] found id: ""
	I0429 20:08:43.207201   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.207213   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:43.207221   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:43.207281   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:43.246746   66615 cri.go:89] found id: ""
	I0429 20:08:43.246783   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.246804   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:43.246811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:43.246875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:43.292786   66615 cri.go:89] found id: ""
	I0429 20:08:43.292813   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.292824   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:43.292831   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:43.292896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:43.337509   66615 cri.go:89] found id: ""
	I0429 20:08:43.337537   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.337546   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:43.337551   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:43.337601   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:43.378446   66615 cri.go:89] found id: ""
	I0429 20:08:43.378473   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.378481   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:43.378490   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:43.378502   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:43.460438   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:43.460474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:43.503908   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:43.503945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:43.561661   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:43.561699   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:43.577924   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:43.577954   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:43.667006   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:40.555875   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:43.057183   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:44.049342   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.049873   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:44.256458   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.256554   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.168175   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:46.212494   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:46.212579   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:46.251567   66615 cri.go:89] found id: ""
	I0429 20:08:46.251593   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.251603   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:46.251610   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:46.251673   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:46.291913   66615 cri.go:89] found id: ""
	I0429 20:08:46.291943   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.291955   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:46.291962   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:46.292023   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:46.331801   66615 cri.go:89] found id: ""
	I0429 20:08:46.331827   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.331836   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:46.331842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:46.331899   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:46.375956   66615 cri.go:89] found id: ""
	I0429 20:08:46.375989   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.376001   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:46.376008   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:46.376090   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:46.425572   66615 cri.go:89] found id: ""
	I0429 20:08:46.425599   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.425609   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:46.425618   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:46.425681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:46.468161   66615 cri.go:89] found id: ""
	I0429 20:08:46.468226   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.468249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:46.468263   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:46.468433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:46.512163   66615 cri.go:89] found id: ""
	I0429 20:08:46.512193   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.512205   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:46.512212   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:46.512277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:46.556047   66615 cri.go:89] found id: ""
	I0429 20:08:46.556078   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.556088   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:46.556099   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:46.556111   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:46.609886   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:46.609921   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:46.625848   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:46.625878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:46.699005   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:46.699037   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:46.699053   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:46.783886   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:46.783923   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.331288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:49.344805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:49.344864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:49.381576   66615 cri.go:89] found id: ""
	I0429 20:08:49.381598   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.381605   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:49.381619   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:49.381667   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:49.418276   66615 cri.go:89] found id: ""
	I0429 20:08:49.418316   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.418329   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:49.418336   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:49.418389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:49.460147   66615 cri.go:89] found id: ""
	I0429 20:08:49.460177   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.460188   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:49.460195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:49.460253   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:49.500534   66615 cri.go:89] found id: ""
	I0429 20:08:49.500562   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.500569   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:49.500575   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:49.500632   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:49.538481   66615 cri.go:89] found id: ""
	I0429 20:08:49.538521   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.538534   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:49.538541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:49.538603   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:49.580192   66615 cri.go:89] found id: ""
	I0429 20:08:49.580218   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.580228   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:49.580234   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:49.580299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:49.616400   66615 cri.go:89] found id: ""
	I0429 20:08:49.616427   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.616437   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:49.616444   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:49.616551   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:49.652871   66615 cri.go:89] found id: ""
	I0429 20:08:49.652900   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.652918   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:49.652931   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:49.652947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:49.728173   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:49.728200   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:49.728212   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:49.813701   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:49.813749   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.855685   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:49.855712   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:49.906480   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:49.906514   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:45.559939   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.056008   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.056054   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.052578   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.550638   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.550910   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.257460   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.259418   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.757365   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.422430   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:52.437412   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:52.437488   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:52.476896   66615 cri.go:89] found id: ""
	I0429 20:08:52.476919   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.476927   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:52.476932   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:52.476976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:52.517266   66615 cri.go:89] found id: ""
	I0429 20:08:52.517298   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.517310   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:52.517318   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:52.517381   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:52.560886   66615 cri.go:89] found id: ""
	I0429 20:08:52.560909   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.560917   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:52.560922   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:52.560969   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:52.601362   66615 cri.go:89] found id: ""
	I0429 20:08:52.601398   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.601419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:52.601429   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:52.601506   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:52.639544   66615 cri.go:89] found id: ""
	I0429 20:08:52.639580   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.639591   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:52.639599   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:52.639652   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:52.681088   66615 cri.go:89] found id: ""
	I0429 20:08:52.681120   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.681130   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:52.681138   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:52.681204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:52.721777   66615 cri.go:89] found id: ""
	I0429 20:08:52.721802   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.721820   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:52.721828   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:52.721900   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:52.762823   66615 cri.go:89] found id: ""
	I0429 20:08:52.762845   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.762856   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:52.762863   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:52.762875   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:52.819291   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:52.819326   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:52.847120   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:52.847165   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:52.956274   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:52.956301   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:52.956317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:53.041636   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:53.041676   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
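	(The repeated "listing CRI containers ... No container was found matching ..." blocks in this run are the log collector probing each expected control-plane component with `sudo crictl ps -a --quiet --name=<component>` and finding nothing, which is consistent with the apiserver refusing connections on localhost:8443. A rough local sketch of that probe loop over the same component names follows; it only illustrates the pattern, the real code runs these commands over SSH inside the VM via ssh_runner.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component names taken from the log lines above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// crictl prints one container ID per line when --quiet is set.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("probe for %q failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}
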
	I0429 20:08:52.056558   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:54.555745   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.051656   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:57.549668   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.257083   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:57.757855   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.592636   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:55.607372   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:55.607449   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:55.643959   66615 cri.go:89] found id: ""
	I0429 20:08:55.643991   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.644000   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:55.644005   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:55.644061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:55.682272   66615 cri.go:89] found id: ""
	I0429 20:08:55.682304   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.682315   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:55.682323   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:55.682384   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:55.720157   66615 cri.go:89] found id: ""
	I0429 20:08:55.720189   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.720200   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:55.720207   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:55.720272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:55.761748   66615 cri.go:89] found id: ""
	I0429 20:08:55.761773   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.761781   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:55.761786   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:55.761842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:55.802377   66615 cri.go:89] found id: ""
	I0429 20:08:55.802405   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.802416   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:55.802423   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:55.802494   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:55.838986   66615 cri.go:89] found id: ""
	I0429 20:08:55.839016   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.839024   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:55.839030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:55.839077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:55.874991   66615 cri.go:89] found id: ""
	I0429 20:08:55.875022   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.875032   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:55.875039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:55.875106   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:55.913561   66615 cri.go:89] found id: ""
	I0429 20:08:55.913595   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.913607   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:55.913618   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:55.913633   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:55.965355   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:55.965391   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:55.981222   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:55.981259   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:56.056656   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:56.056685   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:56.056701   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:56.135276   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:56.135309   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:58.682855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:58.701679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:58.701769   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:58.760807   66615 cri.go:89] found id: ""
	I0429 20:08:58.760828   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.760841   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:58.760858   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:58.760910   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:58.835167   66615 cri.go:89] found id: ""
	I0429 20:08:58.835204   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.835216   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:58.835223   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:58.835289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:58.877367   66615 cri.go:89] found id: ""
	I0429 20:08:58.877398   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.877409   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:58.877417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:58.877483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:58.923726   66615 cri.go:89] found id: ""
	I0429 20:08:58.923751   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.923760   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:58.923766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:58.923817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:58.967780   66615 cri.go:89] found id: ""
	I0429 20:08:58.967804   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.967811   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:58.967816   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:58.967865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:59.010646   66615 cri.go:89] found id: ""
	I0429 20:08:59.010682   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.010690   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:59.010697   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:59.010759   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:59.057380   66615 cri.go:89] found id: ""
	I0429 20:08:59.057408   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.057418   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:59.057426   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:59.057483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:59.099669   66615 cri.go:89] found id: ""
	I0429 20:08:59.099698   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.099706   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:59.099715   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:59.099731   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:59.146831   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:59.146861   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:59.204232   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:59.204274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:59.219799   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:59.219824   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:59.305438   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:59.305465   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:59.305481   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:56.555976   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:58.557892   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:00.049511   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:02.050709   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:00.256064   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:02.257053   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:01.885861   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:01.900746   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:01.900808   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:01.942174   66615 cri.go:89] found id: ""
	I0429 20:09:01.942210   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.942218   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:01.942224   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:01.942285   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:01.986463   66615 cri.go:89] found id: ""
	I0429 20:09:01.986491   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.986502   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:01.986509   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:01.986570   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:02.026290   66615 cri.go:89] found id: ""
	I0429 20:09:02.026314   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.026321   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:02.026327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:02.026375   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:02.064239   66615 cri.go:89] found id: ""
	I0429 20:09:02.064259   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.064266   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:02.064271   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:02.064321   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:02.105807   66615 cri.go:89] found id: ""
	I0429 20:09:02.105838   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.105857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:02.105866   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:02.105926   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:02.144939   66615 cri.go:89] found id: ""
	I0429 20:09:02.144962   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.144970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:02.144975   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:02.145037   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:02.192866   66615 cri.go:89] found id: ""
	I0429 20:09:02.192891   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.192899   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:02.192905   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:02.192955   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:02.232485   66615 cri.go:89] found id: ""
	I0429 20:09:02.232515   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.232524   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:02.232533   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:02.232550   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:02.287374   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:02.287402   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:02.302979   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:02.303009   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:02.380693   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:02.380713   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:02.380725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:02.467048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:02.467084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:01.055311   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:03.055538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:05.056325   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:04.051014   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:06.556497   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:04.758329   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:07.256328   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:05.018176   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:05.033178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:05.033238   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:05.079008   66615 cri.go:89] found id: ""
	I0429 20:09:05.079034   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.079043   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:05.079050   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:05.079113   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:05.118620   66615 cri.go:89] found id: ""
	I0429 20:09:05.118642   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.118650   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:05.118655   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:05.118714   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:05.159603   66615 cri.go:89] found id: ""
	I0429 20:09:05.159646   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.159660   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:05.159666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:05.159733   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:05.200224   66615 cri.go:89] found id: ""
	I0429 20:09:05.200252   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.200262   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:05.200270   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:05.200344   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:05.246341   66615 cri.go:89] found id: ""
	I0429 20:09:05.246384   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.246396   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:05.246403   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:05.246471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:05.286126   66615 cri.go:89] found id: ""
	I0429 20:09:05.286153   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.286163   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:05.286171   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:05.286235   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:05.326911   66615 cri.go:89] found id: ""
	I0429 20:09:05.326941   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.326952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:05.326958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:05.327019   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:05.365564   66615 cri.go:89] found id: ""
	I0429 20:09:05.365592   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.365602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:05.365621   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:05.365637   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:05.445857   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:05.445877   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:05.445889   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:05.530129   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:05.530164   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:05.573936   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:05.573971   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:05.631263   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:05.631299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
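	(Every "describe nodes" attempt in this run fails with "The connection to the server localhost:8443 was refused", so the cycles above keep re-confirming that the apiserver endpoint never comes up. A quick way to reproduce that symptom without kubectl is a plain TCP dial against the same endpoint; localhost:8443 is simply the address from the log, and this check is illustrative only.)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The endpoint kubectl is being refused by in the log above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// With no kube-apiserver container running, this is the expected outcome.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}
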
	I0429 20:09:08.147288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:08.162949   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:08.163021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:08.203009   66615 cri.go:89] found id: ""
	I0429 20:09:08.203033   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.203041   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:08.203047   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:08.203112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:08.241708   66615 cri.go:89] found id: ""
	I0429 20:09:08.241735   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.241744   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:08.241750   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:08.241801   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:08.283976   66615 cri.go:89] found id: ""
	I0429 20:09:08.284005   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.284017   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:08.284023   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:08.284091   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:08.323909   66615 cri.go:89] found id: ""
	I0429 20:09:08.323939   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.323951   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:08.323962   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:08.324031   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:08.363236   66615 cri.go:89] found id: ""
	I0429 20:09:08.363263   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.363271   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:08.363276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:08.363328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:08.401767   66615 cri.go:89] found id: ""
	I0429 20:09:08.401790   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.401798   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:08.401803   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:08.401851   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:08.443678   66615 cri.go:89] found id: ""
	I0429 20:09:08.443709   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.443726   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:08.443731   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:08.443791   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:08.489025   66615 cri.go:89] found id: ""
	I0429 20:09:08.489069   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.489103   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:08.489129   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:08.489163   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:08.543421   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:08.543462   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:08.560425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:08.560459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:08.642819   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:08.642840   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:08.642855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:08.726644   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:08.726682   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:07.555523   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.556138   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.049664   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.050246   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.256452   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.257458   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.277817   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:11.292340   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:11.292420   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:11.330721   66615 cri.go:89] found id: ""
	I0429 20:09:11.330756   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.330768   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:11.330776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:11.330850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:11.372057   66615 cri.go:89] found id: ""
	I0429 20:09:11.372089   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.372098   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:11.372103   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:11.372155   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:11.414786   66615 cri.go:89] found id: ""
	I0429 20:09:11.414814   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.414825   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:11.414832   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:11.414898   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:11.454934   66615 cri.go:89] found id: ""
	I0429 20:09:11.454961   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.454969   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:11.454974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:11.455039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:11.494169   66615 cri.go:89] found id: ""
	I0429 20:09:11.494200   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.494211   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:11.494217   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:11.494277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:11.541646   66615 cri.go:89] found id: ""
	I0429 20:09:11.541684   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.541694   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:11.541701   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:11.541766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:11.584025   66615 cri.go:89] found id: ""
	I0429 20:09:11.584055   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.584067   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:11.584075   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:11.584138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:11.622425   66615 cri.go:89] found id: ""
	I0429 20:09:11.622459   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.622471   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:11.622481   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:11.622493   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:11.676416   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:11.676450   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:11.693793   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:11.693822   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:11.771410   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:11.771437   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:11.771454   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:11.854969   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:11.855047   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:14.398871   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:14.415894   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:14.415983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:14.454718   66615 cri.go:89] found id: ""
	I0429 20:09:14.454752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.454763   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:14.454773   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:14.454836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:14.498562   66615 cri.go:89] found id: ""
	I0429 20:09:14.498591   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.498602   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:14.498609   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:14.498669   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:14.536357   66615 cri.go:89] found id: ""
	I0429 20:09:14.536384   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.536395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:14.536402   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:14.536460   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:14.577240   66615 cri.go:89] found id: ""
	I0429 20:09:14.577274   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.577284   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:14.577291   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:14.577372   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:14.617231   66615 cri.go:89] found id: ""
	I0429 20:09:14.617266   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.617279   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:14.617287   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:14.617355   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:14.659053   66615 cri.go:89] found id: ""
	I0429 20:09:14.659081   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.659090   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:14.659096   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:14.659145   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:14.708723   66615 cri.go:89] found id: ""
	I0429 20:09:14.708752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.708760   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:14.708766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:14.708814   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:14.753732   66615 cri.go:89] found id: ""
	I0429 20:09:14.753762   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.753773   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:14.753783   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:14.753798   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:14.771952   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:14.771985   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:14.842649   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:14.842680   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:14.842696   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:14.925565   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:14.925603   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
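	(With no control-plane containers to inspect, each gathering cycle falls back to the same four sources: kubelet and CRI-O via journalctl, the kernel ring buffer via dmesg, and container status via crictl or docker. A compact sketch that collects those same sources, with the command strings copied verbatim from the log; the real collector streams them over SSH via ssh_runner rather than running them locally.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Log sources and commands as they appear in the gathering lines above.
		sources := []struct{ name, cmd string }{
			{"kubelet", `sudo journalctl -u kubelet -n 400`},
			{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
			{"CRI-O", `sudo journalctl -u crio -n 400`},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range sources {
			fmt.Println("=== Gathering logs for", s.name, "===")
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("%s failed: %v\n", s.name, err)
			}
			fmt.Print(string(out))
		}
	}
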
	I0429 20:09:11.556903   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:14.057196   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:13.550999   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:16.054439   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:13.257735   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:15.756651   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:17.756760   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:14.975731   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:14.975765   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.528872   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:17.544373   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:17.544455   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:17.582977   66615 cri.go:89] found id: ""
	I0429 20:09:17.583001   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.583009   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:17.583014   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:17.583079   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:17.620322   66615 cri.go:89] found id: ""
	I0429 20:09:17.620352   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.620368   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:17.620373   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:17.620421   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:17.664339   66615 cri.go:89] found id: ""
	I0429 20:09:17.664367   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.664375   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:17.664381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:17.664433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:17.705150   66615 cri.go:89] found id: ""
	I0429 20:09:17.705175   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.705184   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:17.705189   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:17.705239   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:17.749713   66615 cri.go:89] found id: ""
	I0429 20:09:17.749738   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.749747   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:17.749752   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:17.749850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:17.791528   66615 cri.go:89] found id: ""
	I0429 20:09:17.791552   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.791560   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:17.791566   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:17.791615   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:17.834994   66615 cri.go:89] found id: ""
	I0429 20:09:17.835024   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.835035   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:17.835050   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:17.835107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:17.872194   66615 cri.go:89] found id: ""
	I0429 20:09:17.872226   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.872236   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:17.872248   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:17.872263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.926899   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:17.926936   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:17.944184   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:17.944218   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:18.029224   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:18.029246   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:18.029258   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:18.111112   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:18.111147   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:16.557282   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:19.056682   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:18.549106   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:20.550026   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:19.758897   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:22.257104   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:20.655965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:20.671420   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:20.671487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:20.710100   66615 cri.go:89] found id: ""
	I0429 20:09:20.710132   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.710144   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:20.710151   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:20.710221   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:20.748849   66615 cri.go:89] found id: ""
	I0429 20:09:20.748877   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.748888   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:20.748894   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:20.748956   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:20.788113   66615 cri.go:89] found id: ""
	I0429 20:09:20.788140   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.788151   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:20.788157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:20.788217   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:20.831432   66615 cri.go:89] found id: ""
	I0429 20:09:20.831455   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.831462   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:20.831470   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:20.831518   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:20.878156   66615 cri.go:89] found id: ""
	I0429 20:09:20.878183   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.878191   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:20.878197   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:20.878262   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:20.920691   66615 cri.go:89] found id: ""
	I0429 20:09:20.920718   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.920729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:20.920735   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:20.920795   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:20.960674   66615 cri.go:89] found id: ""
	I0429 20:09:20.960709   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.960719   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:20.960726   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:20.960786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:21.006462   66615 cri.go:89] found id: ""
	I0429 20:09:21.006486   66615 logs.go:276] 0 containers: []
	W0429 20:09:21.006495   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:21.006503   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:21.006518   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:21.060040   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:21.060076   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:21.077141   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:21.077171   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:21.157058   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:21.157083   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:21.157096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:21.265626   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:21.265662   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:23.813718   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:23.828338   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:23.828400   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:23.868730   66615 cri.go:89] found id: ""
	I0429 20:09:23.868760   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.868771   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:23.868776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:23.868842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:23.907919   66615 cri.go:89] found id: ""
	I0429 20:09:23.907941   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.907949   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:23.907956   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:23.908011   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:23.956769   66615 cri.go:89] found id: ""
	I0429 20:09:23.956794   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.956805   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:23.956811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:23.956875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:23.998578   66615 cri.go:89] found id: ""
	I0429 20:09:23.998612   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.998621   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:23.998628   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:23.998681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:24.037458   66615 cri.go:89] found id: ""
	I0429 20:09:24.037485   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.037492   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:24.037499   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:24.037562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:24.078305   66615 cri.go:89] found id: ""
	I0429 20:09:24.078336   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.078351   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:24.078358   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:24.078418   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:24.120100   66615 cri.go:89] found id: ""
	I0429 20:09:24.120129   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.120139   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:24.120147   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:24.120211   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:24.160953   66615 cri.go:89] found id: ""
	I0429 20:09:24.160988   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.161000   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:24.161012   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:24.161029   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:24.176654   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:24.176686   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:24.256631   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:24.256652   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:24.256668   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:24.335379   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:24.335424   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:24.379616   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:24.379649   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:21.556726   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:24.057483   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:23.050004   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:25.550882   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:27.551051   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:24.257726   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:26.757098   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:26.937283   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:26.956185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:26.956252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:26.997000   66615 cri.go:89] found id: ""
	I0429 20:09:26.997034   66615 logs.go:276] 0 containers: []
	W0429 20:09:26.997046   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:26.997053   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:26.997115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:27.042494   66615 cri.go:89] found id: ""
	I0429 20:09:27.042527   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.042538   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:27.042546   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:27.042608   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:27.086170   66615 cri.go:89] found id: ""
	I0429 20:09:27.086199   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.086211   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:27.086218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:27.086282   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:27.126502   66615 cri.go:89] found id: ""
	I0429 20:09:27.126531   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.126542   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:27.126560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:27.126635   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:27.175102   66615 cri.go:89] found id: ""
	I0429 20:09:27.175134   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.175142   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:27.175148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:27.175216   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:27.215983   66615 cri.go:89] found id: ""
	I0429 20:09:27.216013   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.216025   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:27.216033   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:27.216097   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:27.256427   66615 cri.go:89] found id: ""
	I0429 20:09:27.256456   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.256467   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:27.256474   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:27.256540   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:27.298444   66615 cri.go:89] found id: ""
	I0429 20:09:27.298479   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.298490   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:27.298501   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:27.298517   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:27.381579   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:27.381625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:27.429304   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:27.429350   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:27.483044   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:27.483082   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:27.500304   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:27.500332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:27.583909   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:26.555285   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:28.560544   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:30.049769   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:32.050537   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:29.256689   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:31.257554   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:30.084904   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:30.102417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:30.102486   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:30.146726   66615 cri.go:89] found id: ""
	I0429 20:09:30.146748   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.146755   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:30.146761   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:30.146809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:30.190739   66615 cri.go:89] found id: ""
	I0429 20:09:30.190768   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.190780   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:30.190788   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:30.190853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:30.228836   66615 cri.go:89] found id: ""
	I0429 20:09:30.228864   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.228879   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:30.228887   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:30.228951   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:30.270876   66615 cri.go:89] found id: ""
	I0429 20:09:30.270912   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.270920   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:30.270925   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:30.270995   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:30.310762   66615 cri.go:89] found id: ""
	I0429 20:09:30.310787   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.310795   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:30.310801   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:30.310850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:30.356339   66615 cri.go:89] found id: ""
	I0429 20:09:30.356363   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.356371   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:30.356376   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:30.356430   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:30.395540   66615 cri.go:89] found id: ""
	I0429 20:09:30.395575   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.395589   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:30.395598   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:30.395671   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:30.446237   66615 cri.go:89] found id: ""
	I0429 20:09:30.446263   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.446276   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:30.446286   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:30.446301   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:30.537309   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:30.537334   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:30.537349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:30.629116   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:30.629151   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:30.683308   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:30.683337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:30.735879   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:30.735910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.252322   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:33.268276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:33.268351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:33.309531   66615 cri.go:89] found id: ""
	I0429 20:09:33.309622   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.309641   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:33.309650   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:33.309719   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:33.367480   66615 cri.go:89] found id: ""
	I0429 20:09:33.367515   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.367527   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:33.367535   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:33.367595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:33.433717   66615 cri.go:89] found id: ""
	I0429 20:09:33.433742   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.433751   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:33.433756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:33.433820   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:33.484053   66615 cri.go:89] found id: ""
	I0429 20:09:33.484081   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.484093   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:33.484100   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:33.484165   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:33.524103   66615 cri.go:89] found id: ""
	I0429 20:09:33.524126   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.524136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:33.524143   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:33.524204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:33.565692   66615 cri.go:89] found id: ""
	I0429 20:09:33.565711   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.565719   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:33.565724   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:33.565784   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:33.607119   66615 cri.go:89] found id: ""
	I0429 20:09:33.607143   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.607153   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:33.607160   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:33.607225   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:33.648407   66615 cri.go:89] found id: ""
	I0429 20:09:33.648432   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.648440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:33.648449   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:33.648463   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:33.730744   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:33.730781   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:33.774295   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:33.774328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:33.829609   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:33.829653   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.846048   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:33.846092   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:33.924413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:31.056307   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:33.056538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:34.548872   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.550765   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:33.758571   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.257361   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.425072   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:36.440185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:36.440268   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:36.484364   66615 cri.go:89] found id: ""
	I0429 20:09:36.484386   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.484394   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:36.484400   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:36.484450   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:36.520436   66615 cri.go:89] found id: ""
	I0429 20:09:36.520466   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.520478   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:36.520487   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:36.520549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:36.563597   66615 cri.go:89] found id: ""
	I0429 20:09:36.563622   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.563630   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:36.563635   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:36.563704   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:36.613106   66615 cri.go:89] found id: ""
	I0429 20:09:36.613134   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.613143   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:36.613148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:36.613204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:36.658127   66615 cri.go:89] found id: ""
	I0429 20:09:36.658151   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.658159   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:36.658166   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:36.658229   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:36.707388   66615 cri.go:89] found id: ""
	I0429 20:09:36.707415   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.707423   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:36.707430   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:36.707479   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:36.753363   66615 cri.go:89] found id: ""
	I0429 20:09:36.753394   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.753405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:36.753413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:36.753475   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:36.801492   66615 cri.go:89] found id: ""
	I0429 20:09:36.801513   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.801521   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:36.801530   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:36.801542   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:36.857055   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:36.857108   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:36.874567   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:36.874595   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:36.956176   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:36.956202   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:36.956217   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:37.039958   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:37.039997   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:39.591442   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:39.607842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:39.607927   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:39.651917   66615 cri.go:89] found id: ""
	I0429 20:09:39.651941   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.651948   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:39.651955   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:39.652020   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:39.690032   66615 cri.go:89] found id: ""
	I0429 20:09:39.690059   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.690078   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:39.690086   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:39.690152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:39.733176   66615 cri.go:89] found id: ""
	I0429 20:09:39.733200   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.733209   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:39.733215   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:39.733261   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:39.779528   66615 cri.go:89] found id: ""
	I0429 20:09:39.779560   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.779572   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:39.779581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:39.779650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:39.822408   66615 cri.go:89] found id: ""
	I0429 20:09:39.822436   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.822445   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:39.822452   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:39.822522   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:39.864895   66615 cri.go:89] found id: ""
	I0429 20:09:39.864922   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.864930   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:39.864938   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:39.865008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:39.907498   66615 cri.go:89] found id: ""
	I0429 20:09:39.907523   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.907533   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:39.907539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:39.907606   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:39.948400   66615 cri.go:89] found id: ""
	I0429 20:09:39.948430   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.948440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:39.948449   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:39.948465   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:35.557262   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:38.056877   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:40.058568   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:39.049938   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:41.050139   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:38.756883   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:41.256775   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:39.964733   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:39.964763   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:40.043568   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:40.043593   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:40.043609   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:40.130776   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:40.130815   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:40.182011   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:40.182042   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:42.739068   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:42.756144   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:42.756286   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:42.798776   66615 cri.go:89] found id: ""
	I0429 20:09:42.798801   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.798810   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:42.798815   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:42.798861   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:42.837122   66615 cri.go:89] found id: ""
	I0429 20:09:42.837146   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.837154   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:42.837159   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:42.837205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:42.875435   66615 cri.go:89] found id: ""
	I0429 20:09:42.875461   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.875471   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:42.875479   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:42.875536   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:42.920044   66615 cri.go:89] found id: ""
	I0429 20:09:42.920076   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.920087   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:42.920094   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:42.920175   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:42.960122   66615 cri.go:89] found id: ""
	I0429 20:09:42.960152   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.960163   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:42.960169   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:42.960215   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:42.999784   66615 cri.go:89] found id: ""
	I0429 20:09:42.999811   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.999829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:42.999837   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:42.999917   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:43.040882   66615 cri.go:89] found id: ""
	I0429 20:09:43.040930   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.040952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:43.040959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:43.041044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:43.082596   66615 cri.go:89] found id: ""
	I0429 20:09:43.082627   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.082639   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:43.082650   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:43.082672   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:43.140302   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:43.140343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:43.157508   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:43.157547   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:43.241025   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:43.241047   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:43.241061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:43.325820   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:43.325855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:42.058727   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:44.556415   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:43.051020   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.550017   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:43.258400   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.756441   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:47.757029   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.871561   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:45.887323   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:45.887398   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:45.930021   66615 cri.go:89] found id: ""
	I0429 20:09:45.930050   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.930062   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:45.930088   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:45.930148   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:45.971404   66615 cri.go:89] found id: ""
	I0429 20:09:45.971434   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.971445   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:45.971452   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:45.971513   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:46.018801   66615 cri.go:89] found id: ""
	I0429 20:09:46.018825   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.018833   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:46.018838   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:46.018886   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:46.065118   66615 cri.go:89] found id: ""
	I0429 20:09:46.065140   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.065148   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:46.065153   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:46.065201   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:46.105244   66615 cri.go:89] found id: ""
	I0429 20:09:46.105271   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.105294   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:46.105309   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:46.105373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:46.153736   66615 cri.go:89] found id: ""
	I0429 20:09:46.153759   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.153768   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:46.153773   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:46.153836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:46.198940   66615 cri.go:89] found id: ""
	I0429 20:09:46.198965   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.198973   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:46.198979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:46.199064   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:46.238001   66615 cri.go:89] found id: ""
	I0429 20:09:46.238031   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.238044   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:46.238056   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:46.238087   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:46.292309   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:46.292357   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:46.307243   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:46.307274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:46.386832   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:46.386852   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:46.386869   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:46.468856   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:46.468891   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.017354   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:49.032753   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:49.032832   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:49.075345   66615 cri.go:89] found id: ""
	I0429 20:09:49.075375   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.075388   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:49.075394   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:49.075447   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:49.115294   66615 cri.go:89] found id: ""
	I0429 20:09:49.115328   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.115339   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:49.115347   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:49.115412   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:49.164115   66615 cri.go:89] found id: ""
	I0429 20:09:49.164140   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.164148   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:49.164154   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:49.164210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:49.207643   66615 cri.go:89] found id: ""
	I0429 20:09:49.207668   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.207679   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:49.207698   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:49.207762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:49.247121   66615 cri.go:89] found id: ""
	I0429 20:09:49.247147   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.247156   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:49.247162   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:49.247220   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:49.288594   66615 cri.go:89] found id: ""
	I0429 20:09:49.288626   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.288636   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:49.288643   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:49.288711   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:49.330243   66615 cri.go:89] found id: ""
	I0429 20:09:49.330273   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.330290   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:49.330300   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:49.330365   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:49.371304   66615 cri.go:89] found id: ""
	I0429 20:09:49.371348   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.371360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:49.371372   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:49.371392   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:49.450910   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:49.450949   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.494940   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:49.494970   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:49.553320   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:49.553364   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:49.568850   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:49.568878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:49.644932   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:46.559246   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:49.056790   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:48.050285   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:50.050579   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.549882   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:49.757113   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.258680   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.145702   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:52.162681   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:52.162756   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:52.204816   66615 cri.go:89] found id: ""
	I0429 20:09:52.204858   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.204870   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:52.204888   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:52.204963   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:52.248481   66615 cri.go:89] found id: ""
	I0429 20:09:52.248510   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.248519   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:52.248525   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:52.248596   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:52.289158   66615 cri.go:89] found id: ""
	I0429 20:09:52.289186   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.289194   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:52.289200   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:52.289260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:52.329905   66615 cri.go:89] found id: ""
	I0429 20:09:52.329931   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.329942   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:52.329950   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:52.330025   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:52.372523   66615 cri.go:89] found id: ""
	I0429 20:09:52.372546   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.372554   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:52.372560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:52.372623   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:52.414936   66615 cri.go:89] found id: ""
	I0429 20:09:52.414970   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.414982   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:52.414989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:52.415056   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:52.454139   66615 cri.go:89] found id: ""
	I0429 20:09:52.454164   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.454172   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:52.454178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:52.454236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:52.494093   66615 cri.go:89] found id: ""
	I0429 20:09:52.494129   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.494142   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:52.494155   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:52.494195   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:52.552104   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:52.552142   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:52.568430   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:52.568459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:52.649708   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:52.649736   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:52.649752   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:52.746231   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:52.746272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:51.057536   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:53.556862   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:55.049835   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:57.050606   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:54.759308   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:57.256396   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:55.296228   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:55.311257   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:55.311328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:55.352071   66615 cri.go:89] found id: ""
	I0429 20:09:55.352098   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.352109   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:55.352116   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:55.352177   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:55.399806   66615 cri.go:89] found id: ""
	I0429 20:09:55.399837   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.399847   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:55.399860   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:55.399947   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:55.444372   66615 cri.go:89] found id: ""
	I0429 20:09:55.444398   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.444406   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:55.444411   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:55.444468   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:55.485542   66615 cri.go:89] found id: ""
	I0429 20:09:55.485568   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.485579   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:55.485586   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:55.485670   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:55.535452   66615 cri.go:89] found id: ""
	I0429 20:09:55.535483   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.535494   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:55.535502   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:55.535566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:55.578009   66615 cri.go:89] found id: ""
	I0429 20:09:55.578036   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.578048   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:55.578056   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:55.578138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:55.618302   66615 cri.go:89] found id: ""
	I0429 20:09:55.618336   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.618347   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:55.618355   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:55.618419   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:55.660489   66615 cri.go:89] found id: ""
	I0429 20:09:55.660518   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.660526   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:55.660535   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:55.660548   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:55.713953   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:55.713993   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:55.729624   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:55.729656   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:55.813718   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:55.813746   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:55.813762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:55.898805   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:55.898849   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:58.467014   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:58.482852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:58.482925   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:58.522862   66615 cri.go:89] found id: ""
	I0429 20:09:58.522896   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.522908   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:58.522916   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:58.523000   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:58.568234   66615 cri.go:89] found id: ""
	I0429 20:09:58.568259   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.568266   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:58.568272   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:58.568327   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:58.609147   66615 cri.go:89] found id: ""
	I0429 20:09:58.609175   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.609185   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:58.609192   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:58.609265   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:58.657074   66615 cri.go:89] found id: ""
	I0429 20:09:58.657104   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.657115   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:58.657122   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:58.657186   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:58.706819   66615 cri.go:89] found id: ""
	I0429 20:09:58.706846   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.706857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:58.706865   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:58.706929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:58.754967   66615 cri.go:89] found id: ""
	I0429 20:09:58.754998   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.755007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:58.755018   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:58.755078   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:58.793657   66615 cri.go:89] found id: ""
	I0429 20:09:58.793694   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.793704   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:58.793709   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:58.793766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:58.832023   66615 cri.go:89] found id: ""
	I0429 20:09:58.832055   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.832066   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:58.832078   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:58.832094   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:58.886568   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:58.886605   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:58.902126   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:58.902154   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:58.986786   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:58.986814   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:58.986831   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:59.072258   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:59.072296   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:55.557245   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:58.056570   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:59.549825   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:02.050651   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:59.756493   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:01.756935   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:01.620172   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:01.636958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:01.637055   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:01.703865   66615 cri.go:89] found id: ""
	I0429 20:10:01.703890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.703899   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:01.703905   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:01.703950   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:01.742655   66615 cri.go:89] found id: ""
	I0429 20:10:01.742684   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.742692   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:01.742707   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:01.742778   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:01.782866   66615 cri.go:89] found id: ""
	I0429 20:10:01.782890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.782901   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:01.782908   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:01.782964   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:01.822958   66615 cri.go:89] found id: ""
	I0429 20:10:01.822984   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.822992   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:01.822997   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:01.823044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:01.868581   66615 cri.go:89] found id: ""
	I0429 20:10:01.868604   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.868612   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:01.868622   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:01.868675   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:01.908216   66615 cri.go:89] found id: ""
	I0429 20:10:01.908241   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.908249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:01.908255   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:01.908328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:01.953100   66615 cri.go:89] found id: ""
	I0429 20:10:01.953131   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.953142   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:01.953150   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:01.953213   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:01.999940   66615 cri.go:89] found id: ""
	I0429 20:10:01.999974   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.999988   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:01.999999   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:02.000012   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:02.061669   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:02.061704   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:02.077609   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:02.077640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:02.169643   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:02.169666   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:02.169679   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:02.250615   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:02.250657   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:04.803629   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:04.819286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:04.819364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:04.860501   66615 cri.go:89] found id: ""
	I0429 20:10:04.860530   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.860541   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:04.860548   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:04.860672   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:04.898444   66615 cri.go:89] found id: ""
	I0429 20:10:04.898472   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.898480   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:04.898486   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:04.898546   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:04.936569   66615 cri.go:89] found id: ""
	I0429 20:10:04.936599   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.936609   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:04.936617   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:04.936695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:00.556325   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:02.557754   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:05.058245   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:04.551711   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:07.050327   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:03.757096   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:06.257529   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:04.979667   66615 cri.go:89] found id: ""
	I0429 20:10:04.979696   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.979708   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:04.979715   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:04.979768   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:05.019608   66615 cri.go:89] found id: ""
	I0429 20:10:05.019638   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.019650   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:05.019658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:05.019724   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:05.063723   66615 cri.go:89] found id: ""
	I0429 20:10:05.063749   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.063758   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:05.063765   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:05.063821   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:05.106676   66615 cri.go:89] found id: ""
	I0429 20:10:05.106704   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.106714   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:05.106721   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:05.106783   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:05.147652   66615 cri.go:89] found id: ""
	I0429 20:10:05.147683   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.147693   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:05.147704   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:05.147721   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:05.189048   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:05.189085   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:05.248635   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:05.248669   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:05.265791   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:05.265826   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:05.343190   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:05.343217   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:05.343234   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:07.926868   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:07.942581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:07.942656   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:07.981316   66615 cri.go:89] found id: ""
	I0429 20:10:07.981349   66615 logs.go:276] 0 containers: []
	W0429 20:10:07.981361   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:07.981368   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:07.981429   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:08.024017   66615 cri.go:89] found id: ""
	I0429 20:10:08.024045   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.024056   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:08.024062   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:08.024146   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:08.075761   66615 cri.go:89] found id: ""
	I0429 20:10:08.075786   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.075798   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:08.075805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:08.075864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:08.146501   66615 cri.go:89] found id: ""
	I0429 20:10:08.146528   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.146536   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:08.146541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:08.146624   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:08.204987   66615 cri.go:89] found id: ""
	I0429 20:10:08.205013   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.205021   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:08.205027   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:08.205083   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:08.244930   66615 cri.go:89] found id: ""
	I0429 20:10:08.244959   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.244970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:08.244979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:08.245040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:08.284204   66615 cri.go:89] found id: ""
	I0429 20:10:08.284232   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.284243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:08.284250   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:08.284305   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:08.324077   66615 cri.go:89] found id: ""
	I0429 20:10:08.324102   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.324113   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:08.324123   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:08.324139   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:08.341584   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:08.341614   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:08.429808   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:08.429827   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:08.429840   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:08.509906   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:08.509942   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:08.562662   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:08.562697   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:07.557462   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:10.055718   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:09.553108   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:12.050533   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:12.543954   66218 pod_ready.go:81] duration metric: took 4m0.001047967s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" ...
	E0429 20:10:12.543994   66218 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 20:10:12.544032   66218 pod_ready.go:38] duration metric: took 4m6.615064199s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:10:12.544058   66218 kubeadm.go:591] duration metric: took 4m18.60301174s to restartPrimaryControlPlane
	W0429 20:10:12.544116   66218 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:10:12.544146   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:10:08.757127   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:10.760764   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:11.121673   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:11.137328   66615 kubeadm.go:591] duration metric: took 4m4.72832668s to restartPrimaryControlPlane
	W0429 20:10:11.137411   66615 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:10:11.137446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:10:13.254357   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.116867978s)
	I0429 20:10:13.254436   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:13.275293   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:10:13.287073   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:10:13.298046   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:10:13.298080   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:10:13.298132   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:10:13.311790   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:10:13.311861   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:10:13.323201   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:10:13.334284   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:10:13.334357   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:10:13.348597   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.361993   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:10:13.362055   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.376185   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:10:13.389715   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:10:13.389778   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:10:13.403955   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:10:13.675887   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:10:12.056403   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:14.059895   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:13.257345   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:15.257388   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:17.259138   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:16.557200   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:18.559617   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:19.756708   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:21.757655   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:21.056581   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:23.057477   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:24.256386   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:26.757303   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:25.556902   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:28.055172   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:30.056549   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:29.256790   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:31.757538   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:32.560174   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:35.056286   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:33.758717   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:36.257274   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:37.056603   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:39.557292   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:38.757913   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:40.758857   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:42.056927   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:44.557003   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:44.557038   66875 pod_ready.go:81] duration metric: took 4m0.008018273s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	E0429 20:10:44.557050   66875 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0429 20:10:44.557062   66875 pod_ready.go:38] duration metric: took 4m2.911025288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:10:44.557085   66875 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:10:44.557123   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:44.557191   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:44.620871   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:44.620900   66875 cri.go:89] found id: ""
	I0429 20:10:44.620910   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:44.620970   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.626852   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:44.626919   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:44.673726   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:44.673753   66875 cri.go:89] found id: ""
	I0429 20:10:44.673762   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:44.673827   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.680083   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:44.680157   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:44.724866   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:44.724899   66875 cri.go:89] found id: ""
	I0429 20:10:44.724909   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:44.724976   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.730438   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:44.730492   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:44.785159   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:44.785178   66875 cri.go:89] found id: ""
	I0429 20:10:44.785185   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:44.785230   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.790370   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:44.790432   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:44.839200   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:44.839219   66875 cri.go:89] found id: ""
	I0429 20:10:44.839226   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:44.839277   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.845411   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:44.845490   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:44.907184   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:44.907210   66875 cri.go:89] found id: ""
	I0429 20:10:44.907224   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:44.907281   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.914531   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:44.914596   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:44.957389   66875 cri.go:89] found id: ""
	I0429 20:10:44.957422   66875 logs.go:276] 0 containers: []
	W0429 20:10:44.957430   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:44.957436   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:44.957493   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:45.001760   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:45.001783   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:45.001789   66875 cri.go:89] found id: ""
	I0429 20:10:45.001796   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:45.001845   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:45.007293   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:45.012864   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:45.012886   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:45.406875   66218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.862702626s)
	I0429 20:10:45.406957   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:45.424927   66218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:10:45.436628   66218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:10:45.447896   66218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:10:45.447921   66218 kubeadm.go:156] found existing configuration files:
	
	I0429 20:10:45.447970   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:10:45.458604   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:10:45.458662   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:10:45.469701   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:10:45.479738   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:10:45.479796   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:10:45.490097   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:10:45.500840   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:10:45.500903   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:10:45.512918   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:10:45.524679   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:10:45.524756   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:10:45.536044   66218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:10:45.598481   66218 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:10:45.598556   66218 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:10:45.783162   66218 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:10:45.783321   66218 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:10:45.783481   66218 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:10:46.079842   66218 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:10:46.081981   66218 out.go:204]   - Generating certificates and keys ...
	I0429 20:10:46.082084   66218 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:10:46.082174   66218 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:10:46.082295   66218 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:10:46.082382   66218 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:10:46.082485   66218 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:10:46.082578   66218 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:10:46.082694   66218 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:10:46.082793   66218 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:10:46.082906   66218 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:10:46.082976   66218 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:10:46.083009   66218 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:10:46.083070   66218 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:10:46.242368   66218 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:10:46.667998   66218 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:10:46.832801   66218 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:10:47.033146   66218 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:10:47.265305   66218 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:10:47.266631   66218 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:10:47.271057   66218 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:10:47.273021   66218 out.go:204]   - Booting up control plane ...
	I0429 20:10:47.273128   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:10:47.273245   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:10:47.273333   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:10:47.293530   66218 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:10:47.294487   66218 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:10:47.294564   66218 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:10:47.435669   66218 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:10:47.435802   66218 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:10:43.256983   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:45.257106   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:47.757018   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:45.564197   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:45.564231   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:45.635133   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:45.635168   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:45.779957   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:45.779992   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:45.827796   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:45.827828   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:45.870603   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:45.870636   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:45.935181   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:45.935220   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:46.007476   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:46.007518   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:46.071132   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:46.071169   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:46.130185   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:46.130218   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:46.148649   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:46.148684   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:46.196227   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:46.196266   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:46.245663   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:46.245707   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:48.789522   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:48.810752   66875 api_server.go:72] duration metric: took 4m14.399329979s to wait for apiserver process to appear ...
	I0429 20:10:48.810785   66875 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:10:48.810826   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:48.810921   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:48.868391   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:48.868415   66875 cri.go:89] found id: ""
	I0429 20:10:48.868424   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:48.868490   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.874253   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:48.874329   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:48.934057   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:48.934103   66875 cri.go:89] found id: ""
	I0429 20:10:48.934113   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:48.934173   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.940161   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:48.940244   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:48.992205   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:48.992227   66875 cri.go:89] found id: ""
	I0429 20:10:48.992234   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:48.992297   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.997496   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:48.997568   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:49.038579   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:49.038612   66875 cri.go:89] found id: ""
	I0429 20:10:49.038622   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:49.038683   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.045062   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:49.045129   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:49.084533   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:49.084561   66875 cri.go:89] found id: ""
	I0429 20:10:49.084570   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:49.084628   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.089601   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:49.089680   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:49.133281   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:49.133315   66875 cri.go:89] found id: ""
	I0429 20:10:49.133324   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:49.133387   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.140784   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:49.140889   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:49.201071   66875 cri.go:89] found id: ""
	I0429 20:10:49.201102   66875 logs.go:276] 0 containers: []
	W0429 20:10:49.201112   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:49.201117   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:49.201182   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:49.248708   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:49.248732   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:49.248738   66875 cri.go:89] found id: ""
	I0429 20:10:49.248747   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:49.248807   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.254131   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.259257   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:49.259287   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:49.325386   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:49.325417   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:49.371335   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:49.371365   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:49.414056   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:49.414112   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:49.469457   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:49.469493   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:49.523091   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:49.523123   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:49.581937   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:49.581977   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:49.599704   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:49.599738   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:49.738943   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:49.738984   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:49.814482   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:49.814521   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:50.306035   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:50.306084   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:50.371400   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:50.371485   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:50.426578   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:50.426613   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:48.438095   66218 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002489157s
	I0429 20:10:48.438230   66218 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:10:49.758262   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:52.256578   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:53.941848   66218 kubeadm.go:309] [api-check] The API server is healthy after 5.503491397s
	I0429 20:10:53.961404   66218 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:10:53.979792   66218 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:10:54.018524   66218 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:10:54.018776   66218 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-456788 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:10:54.037050   66218 kubeadm.go:309] [bootstrap-token] Using token: 793n05.pmfi0tdyn7q4x0lt
	I0429 20:10:54.038421   66218 out.go:204]   - Configuring RBAC rules ...
	I0429 20:10:54.038551   66218 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:10:54.045190   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:10:54.054625   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:10:54.060216   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:10:54.068878   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:10:54.073537   66218 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:10:54.355285   66218 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:10:54.800956   66218 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:10:55.352995   66218 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:10:55.353026   66218 kubeadm.go:309] 
	I0429 20:10:55.353135   66218 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:10:55.353158   66218 kubeadm.go:309] 
	I0429 20:10:55.353245   66218 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:10:55.353254   66218 kubeadm.go:309] 
	I0429 20:10:55.353290   66218 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:10:55.353382   66218 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:10:55.353456   66218 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:10:55.353467   66218 kubeadm.go:309] 
	I0429 20:10:55.353564   66218 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:10:55.353578   66218 kubeadm.go:309] 
	I0429 20:10:55.353637   66218 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:10:55.353648   66218 kubeadm.go:309] 
	I0429 20:10:55.353735   66218 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:10:55.353937   66218 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:10:55.354052   66218 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:10:55.354095   66218 kubeadm.go:309] 
	I0429 20:10:55.354216   66218 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:10:55.354334   66218 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:10:55.354348   66218 kubeadm.go:309] 
	I0429 20:10:55.354464   66218 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 793n05.pmfi0tdyn7q4x0lt \
	I0429 20:10:55.354615   66218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 20:10:55.354643   66218 kubeadm.go:309] 	--control-plane 
	I0429 20:10:55.354667   66218 kubeadm.go:309] 
	I0429 20:10:55.354799   66218 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:10:55.354810   66218 kubeadm.go:309] 
	I0429 20:10:55.354943   66218 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 793n05.pmfi0tdyn7q4x0lt \
	I0429 20:10:55.355111   66218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
	I0429 20:10:55.355493   66218 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:10:55.355513   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:10:55.355520   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:10:55.357341   66218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:10:52.999575   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:10:53.005598   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 200:
	ok
	I0429 20:10:53.006923   66875 api_server.go:141] control plane version: v1.30.0
	I0429 20:10:53.006951   66875 api_server.go:131] duration metric: took 4.196158371s to wait for apiserver health ...
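	The health wait above polls the apiserver's /healthz endpoint until it answers 200 with body "ok". A hedged manual equivalent (curl is an assumption here; minikube itself uses its own HTTP client):

	    # -k skips verification of the cluster's self-signed certificate; expected body: ok
	    curl -sk https://192.168.61.106:8444/healthz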
	I0429 20:10:53.006978   66875 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:10:53.007011   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:53.007073   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:53.064156   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:53.064186   66875 cri.go:89] found id: ""
	I0429 20:10:53.064196   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:53.064256   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.069282   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:53.069361   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:53.128981   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:53.129016   66875 cri.go:89] found id: ""
	I0429 20:10:53.129025   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:53.129086   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.134680   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:53.134779   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:53.188828   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:53.188857   66875 cri.go:89] found id: ""
	I0429 20:10:53.188869   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:53.188922   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.195332   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:53.195401   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:53.245528   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:53.245548   66875 cri.go:89] found id: ""
	I0429 20:10:53.245556   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:53.245617   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.251849   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:53.251925   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:53.302914   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:53.302941   66875 cri.go:89] found id: ""
	I0429 20:10:53.302950   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:53.303004   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.308072   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:53.308138   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:53.358655   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:53.358684   66875 cri.go:89] found id: ""
	I0429 20:10:53.358693   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:53.358753   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.363796   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:53.363875   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:53.413543   66875 cri.go:89] found id: ""
	I0429 20:10:53.413573   66875 logs.go:276] 0 containers: []
	W0429 20:10:53.413586   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:53.413593   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:53.413651   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:53.457365   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:53.457393   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:53.457399   66875 cri.go:89] found id: ""
	I0429 20:10:53.457409   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:53.457473   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.464321   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.469358   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:53.469377   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:53.605546   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:53.605594   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:53.682788   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:53.682837   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:53.725985   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:53.726017   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:53.775864   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:53.775890   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:53.834762   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:53.834801   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:53.853796   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:53.853830   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:53.915651   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:53.915680   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:53.968857   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:53.968885   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:54.024061   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:54.024090   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:54.079637   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:54.079674   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:54.129296   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:54.129325   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:54.499803   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:54.499861   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:57.070245   66875 system_pods.go:59] 8 kube-system pods found
	I0429 20:10:57.070288   66875 system_pods.go:61] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running
	I0429 20:10:57.070296   66875 system_pods.go:61] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running
	I0429 20:10:57.070302   66875 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running
	I0429 20:10:57.070308   66875 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running
	I0429 20:10:57.070313   66875 system_pods.go:61] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running
	I0429 20:10:57.070318   66875 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running
	I0429 20:10:57.070329   66875 system_pods.go:61] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:10:57.070339   66875 system_pods.go:61] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running
	I0429 20:10:57.070353   66875 system_pods.go:74] duration metric: took 4.063366088s to wait for pod list to return data ...
	I0429 20:10:57.070366   66875 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:10:57.077008   66875 default_sa.go:45] found service account: "default"
	I0429 20:10:57.077031   66875 default_sa.go:55] duration metric: took 6.655489ms for default service account to be created ...
	I0429 20:10:57.077040   66875 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:10:57.087665   66875 system_pods.go:86] 8 kube-system pods found
	I0429 20:10:57.087695   66875 system_pods.go:89] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running
	I0429 20:10:57.087701   66875 system_pods.go:89] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running
	I0429 20:10:57.087707   66875 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running
	I0429 20:10:57.087711   66875 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running
	I0429 20:10:57.087715   66875 system_pods.go:89] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running
	I0429 20:10:57.087719   66875 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running
	I0429 20:10:57.087726   66875 system_pods.go:89] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:10:57.087730   66875 system_pods.go:89] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running
	I0429 20:10:57.087740   66875 system_pods.go:126] duration metric: took 10.694398ms to wait for k8s-apps to be running ...
	I0429 20:10:57.087749   66875 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:10:57.087794   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:57.106878   66875 system_svc.go:56] duration metric: took 19.118595ms WaitForService to wait for kubelet
	I0429 20:10:57.106917   66875 kubeadm.go:576] duration metric: took 4m22.695498557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:10:57.106945   66875 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:10:57.111052   66875 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:10:57.111082   66875 node_conditions.go:123] node cpu capacity is 2
	I0429 20:10:57.111096   66875 node_conditions.go:105] duration metric: took 4.144283ms to run NodePressure ...
	I0429 20:10:57.111112   66875 start.go:240] waiting for startup goroutines ...
	I0429 20:10:57.111122   66875 start.go:245] waiting for cluster config update ...
	I0429 20:10:57.111141   66875 start.go:254] writing updated cluster config ...
	I0429 20:10:57.111536   66875 ssh_runner.go:195] Run: rm -f paused
	I0429 20:10:57.169536   66875 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:10:57.172347   66875 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-866143" cluster and "default" namespace by default
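	At this point the kubeconfig context carries the profile name, so a quick sanity check from the host looks roughly like the following (a sketch, assuming the default kubeconfig written by minikube):

	    kubectl config current-context        # expected: default-k8s-diff-port-866143
	    kubectl -n kube-system get pods       # the same pods enumerated by system_pods above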
	I0429 20:10:55.358683   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:10:55.371397   66218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:10:55.397119   66218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:10:55.397192   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:55.397192   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-456788 minikube.k8s.io/updated_at=2024_04_29T20_10_55_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=no-preload-456788 minikube.k8s.io/primary=true
	I0429 20:10:55.605222   66218 ops.go:34] apiserver oom_adj: -16
	I0429 20:10:55.605588   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:56.106450   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:56.605894   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:57.105657   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:57.605823   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:54.258101   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:56.258336   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:58.106263   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:58.605675   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:59.106483   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:59.605671   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:00.105670   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:00.605695   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:01.106482   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:01.606206   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:02.106534   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:02.606372   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:58.756416   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:00.756875   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:02.756955   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:03.106555   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:03.606298   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.106227   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.606531   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:05.105708   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:05.605735   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:06.106556   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:06.606380   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:07.105690   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:07.605718   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.749964   65980 pod_ready.go:81] duration metric: took 4m0.000195525s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" ...
	E0429 20:11:04.749999   65980 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 20:11:04.750024   65980 pod_ready.go:38] duration metric: took 4m6.211964949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:04.750053   65980 kubeadm.go:591] duration metric: took 4m17.268163648s to restartPrimaryControlPlane
	W0429 20:11:04.750123   65980 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:11:04.750156   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
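	The 4m0s wait for metrics-server-569cc877fc-c4h7f expired, so this profile falls back to a full kubeadm reset. When a pod sits in Pending/NotReady like this, a hedged way to inspect it by hand (standard kubectl against the affected profile's context; not something the test itself runs):

	    kubectl -n kube-system describe pod metrics-server-569cc877fc-c4h7f
	    kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -n 20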
	I0429 20:11:08.106383   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:08.606498   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:08.726533   66218 kubeadm.go:1107] duration metric: took 13.329402445s to wait for elevateKubeSystemPrivileges
	W0429 20:11:08.726584   66218 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:11:08.726596   66218 kubeadm.go:393] duration metric: took 5m14.838913251s to StartCluster
	I0429 20:11:08.726617   66218 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:08.726706   66218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:11:08.729364   66218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:08.730202   66218 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:11:08.731600   66218 out.go:177] * Verifying Kubernetes components...
	I0429 20:11:08.730245   66218 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:11:08.730446   66218 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:11:08.733479   66218 addons.go:69] Setting storage-provisioner=true in profile "no-preload-456788"
	I0429 20:11:08.733509   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:11:08.733518   66218 addons.go:69] Setting default-storageclass=true in profile "no-preload-456788"
	I0429 20:11:08.733540   66218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-456788"
	I0429 20:11:08.733514   66218 addons.go:234] Setting addon storage-provisioner=true in "no-preload-456788"
	W0429 20:11:08.733641   66218 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:11:08.733674   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.733963   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.733988   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.734081   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.734079   66218 addons.go:69] Setting metrics-server=true in profile "no-preload-456788"
	I0429 20:11:08.734106   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.734117   66218 addons.go:234] Setting addon metrics-server=true in "no-preload-456788"
	W0429 20:11:08.734126   66218 addons.go:243] addon metrics-server should already be in state true
	I0429 20:11:08.734154   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.734503   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.734536   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.754451   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I0429 20:11:08.754650   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0429 20:11:08.754827   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
	I0429 20:11:08.755114   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755237   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755332   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755884   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.755905   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756031   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.756048   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.756050   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756062   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756456   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756477   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756513   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756853   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.757231   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.757254   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.757256   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.757291   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.761534   66218 addons.go:234] Setting addon default-storageclass=true in "no-preload-456788"
	W0429 20:11:08.761551   66218 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:11:08.761574   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.761857   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.761894   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.776659   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0429 20:11:08.776838   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0429 20:11:08.777067   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.777462   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.777643   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.777657   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.778152   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.778162   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.778170   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.778371   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.778845   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.778901   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0429 20:11:08.779220   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.779415   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.779446   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.779621   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.779634   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.780051   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.780246   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.780506   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.782432   66218 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:11:08.783809   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:11:08.783825   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:11:08.783843   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.782370   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.786004   66218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:11:08.787488   66218 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:08.787506   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:11:08.787663   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.788245   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.788290   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.788308   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.788381   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.788632   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.788834   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.788985   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:08.791587   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.791964   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.792052   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.792293   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.792477   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.792614   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.792712   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:08.798944   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I0429 20:11:08.799562   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.800224   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.800243   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.800790   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.801008   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.803220   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.803519   66218 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:08.803534   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:11:08.803552   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.806797   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.807216   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.807244   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.807540   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.807986   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.808170   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.808313   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:09.006753   66218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:11:09.038156   66218 node_ready.go:35] waiting up to 6m0s for node "no-preload-456788" to be "Ready" ...
	I0429 20:11:09.051516   66218 node_ready.go:49] node "no-preload-456788" has status "Ready":"True"
	I0429 20:11:09.051545   66218 node_ready.go:38] duration metric: took 13.34705ms for node "no-preload-456788" to be "Ready" ...
	I0429 20:11:09.051557   66218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:09.064032   66218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:09.308339   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:09.308749   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:11:09.308773   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:11:09.309961   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:09.347829   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:11:09.347860   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:11:09.466683   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:11:09.466718   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:11:09.678800   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:11:09.718867   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.718899   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.719248   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.719276   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.719273   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:09.719288   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.719296   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.719553   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.719574   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.719581   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:09.726177   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.726204   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.726527   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.726544   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.726590   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:10.570942   66218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.260944092s)
	I0429 20:11:10.571001   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.571012   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.571480   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.571504   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.571520   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.571528   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.571792   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:10.571818   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.571833   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.912211   66218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.233359134s)
	I0429 20:11:10.912282   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.912298   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.912746   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.912769   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.912779   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.912787   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.913055   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.913108   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.913132   66218 addons.go:470] Verifying addon metrics-server=true in "no-preload-456788"
	I0429 20:11:10.916694   66218 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0429 20:11:10.918273   66218 addons.go:505] duration metric: took 2.188028967s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
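	With the metrics-server manifests applied, a hedged follow-up check that the addon is actually serving (assumes the kubectl context points at no-preload-456788 and that the addon's Deployment is named metrics-server, matching the pod prefix seen in the pod listings):

	    kubectl -n kube-system rollout status deployment/metrics-server
	    kubectl top nodes    # succeeds only once metrics-server is scraping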
	I0429 20:11:11.108067   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.108091   66218 pod_ready.go:81] duration metric: took 2.044032617s for pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.108103   66218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.115163   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.115196   66218 pod_ready.go:81] duration metric: took 7.084503ms for pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.115210   66218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.129264   66218 pod_ready.go:92] pod "etcd-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.129286   66218 pod_ready.go:81] duration metric: took 14.068541ms for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.129297   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.148114   66218 pod_ready.go:92] pod "kube-apiserver-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.148142   66218 pod_ready.go:81] duration metric: took 18.837962ms for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.148155   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.157985   66218 pod_ready.go:92] pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.158006   66218 pod_ready.go:81] duration metric: took 9.844321ms for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.158016   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6m95d" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.469680   66218 pod_ready.go:92] pod "kube-proxy-6m95d" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.469701   66218 pod_ready.go:81] duration metric: took 311.678646ms for pod "kube-proxy-6m95d" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.469710   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.868513   66218 pod_ready.go:92] pod "kube-scheduler-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.868539   66218 pod_ready.go:81] duration metric: took 398.821528ms for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.868550   66218 pod_ready.go:38] duration metric: took 2.816983409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:11.868569   66218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:11:11.868632   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:11:11.885115   66218 api_server.go:72] duration metric: took 3.154873937s to wait for apiserver process to appear ...
	I0429 20:11:11.885146   66218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:11:11.885169   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:11:11.890715   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I0429 20:11:11.891649   66218 api_server.go:141] control plane version: v1.30.0
	I0429 20:11:11.891671   66218 api_server.go:131] duration metric: took 6.518818ms to wait for apiserver health ...
	I0429 20:11:11.891679   66218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:11:12.072142   66218 system_pods.go:59] 9 kube-system pods found
	I0429 20:11:12.072175   66218 system_pods.go:61] "coredns-7db6d8ff4d-hcfbq" [c0b53824-478e-4523-ada4-1cd7ba306c81] Running
	I0429 20:11:12.072183   66218 system_pods.go:61] "coredns-7db6d8ff4d-pvhwv" [f38ee7b3-53fe-4609-9b2b-000f55de5d5c] Running
	I0429 20:11:12.072188   66218 system_pods.go:61] "etcd-no-preload-456788" [b0629d4c-643a-485d-aa85-33fe009fff50] Running
	I0429 20:11:12.072194   66218 system_pods.go:61] "kube-apiserver-no-preload-456788" [e56edf5c-9883-4cd9-abab-09902048f584] Running
	I0429 20:11:12.072200   66218 system_pods.go:61] "kube-controller-manager-no-preload-456788" [bfaf44f0-da19-4cec-bec9-d9917cb8a571] Running
	I0429 20:11:12.072205   66218 system_pods.go:61] "kube-proxy-6m95d" [25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7] Running
	I0429 20:11:12.072209   66218 system_pods.go:61] "kube-scheduler-no-preload-456788" [de4f90f7-05d6-4755-a4c0-2c522f7fe88c] Running
	I0429 20:11:12.072217   66218 system_pods.go:61] "metrics-server-569cc877fc-sxgwr" [046d28fe-d51e-43ba-9550-d1d7e33d9d84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:11:12.072224   66218 system_pods.go:61] "storage-provisioner" [fd1c4813-8889-4f21-b21e-6007eaa163a6] Running
	I0429 20:11:12.072247   66218 system_pods.go:74] duration metric: took 180.561509ms to wait for pod list to return data ...
	I0429 20:11:12.072256   66218 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:11:12.267637   66218 default_sa.go:45] found service account: "default"
	I0429 20:11:12.267663   66218 default_sa.go:55] duration metric: took 195.398841ms for default service account to be created ...
	I0429 20:11:12.267677   66218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:11:12.471933   66218 system_pods.go:86] 9 kube-system pods found
	I0429 20:11:12.471967   66218 system_pods.go:89] "coredns-7db6d8ff4d-hcfbq" [c0b53824-478e-4523-ada4-1cd7ba306c81] Running
	I0429 20:11:12.471975   66218 system_pods.go:89] "coredns-7db6d8ff4d-pvhwv" [f38ee7b3-53fe-4609-9b2b-000f55de5d5c] Running
	I0429 20:11:12.471981   66218 system_pods.go:89] "etcd-no-preload-456788" [b0629d4c-643a-485d-aa85-33fe009fff50] Running
	I0429 20:11:12.471987   66218 system_pods.go:89] "kube-apiserver-no-preload-456788" [e56edf5c-9883-4cd9-abab-09902048f584] Running
	I0429 20:11:12.471994   66218 system_pods.go:89] "kube-controller-manager-no-preload-456788" [bfaf44f0-da19-4cec-bec9-d9917cb8a571] Running
	I0429 20:11:12.471999   66218 system_pods.go:89] "kube-proxy-6m95d" [25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7] Running
	I0429 20:11:12.472008   66218 system_pods.go:89] "kube-scheduler-no-preload-456788" [de4f90f7-05d6-4755-a4c0-2c522f7fe88c] Running
	I0429 20:11:12.472020   66218 system_pods.go:89] "metrics-server-569cc877fc-sxgwr" [046d28fe-d51e-43ba-9550-d1d7e33d9d84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:11:12.472027   66218 system_pods.go:89] "storage-provisioner" [fd1c4813-8889-4f21-b21e-6007eaa163a6] Running
	I0429 20:11:12.472039   66218 system_pods.go:126] duration metric: took 204.355515ms to wait for k8s-apps to be running ...
	I0429 20:11:12.472052   66218 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:11:12.472110   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:11:12.487748   66218 system_svc.go:56] duration metric: took 15.68796ms WaitForService to wait for kubelet
	I0429 20:11:12.487779   66218 kubeadm.go:576] duration metric: took 3.757538662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:11:12.487804   66218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:11:12.668597   66218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:11:12.668619   66218 node_conditions.go:123] node cpu capacity is 2
	I0429 20:11:12.668629   66218 node_conditions.go:105] duration metric: took 180.819727ms to run NodePressure ...
	I0429 20:11:12.668640   66218 start.go:240] waiting for startup goroutines ...
	I0429 20:11:12.668646   66218 start.go:245] waiting for cluster config update ...
	I0429 20:11:12.668656   66218 start.go:254] writing updated cluster config ...
	I0429 20:11:12.668905   66218 ssh_runner.go:195] Run: rm -f paused
	I0429 20:11:12.718997   66218 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:11:12.720757   66218 out.go:177] * Done! kubectl is now configured to use "no-preload-456788" cluster and "default" namespace by default
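
For reference, the apiserver health probe recorded above (api_server.go checking https://192.168.39.235:8443/healthz and logging "200: ok") can be approximated with a short Go program. This is only an illustrative sketch, not part of the test log: the address is copied from the log lines above, and TLS verification is skipped purely to keep the example self-contained.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Probe the apiserver healthz endpoint roughly the way the log above shows.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
        }
        resp, err := client.Get("https://192.168.39.235:8443/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // a healthy control plane answers "200: ok"
    }
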
	I0429 20:11:37.819019   65980 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.068841912s)
	I0429 20:11:37.819092   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:11:37.836850   65980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:11:37.849684   65980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:11:37.861597   65980 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:11:37.861626   65980 kubeadm.go:156] found existing configuration files:
	
	I0429 20:11:37.861674   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:11:37.872799   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:11:37.872860   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:11:37.884336   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:11:37.895124   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:11:37.895181   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:11:37.906874   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:11:37.917482   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:11:37.917530   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:11:37.928137   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:11:37.938698   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:11:37.938750   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:11:37.949658   65980 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:11:38.159358   65980 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:11:46.848042   65980 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:11:46.848108   65980 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:11:46.848169   65980 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:11:46.848308   65980 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:11:46.848447   65980 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:11:46.848531   65980 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:11:46.850368   65980 out.go:204]   - Generating certificates and keys ...
	I0429 20:11:46.850444   65980 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:11:46.850496   65980 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:11:46.850580   65980 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:11:46.850649   65980 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:11:46.850742   65980 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:11:46.850850   65980 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:11:46.850949   65980 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:11:46.851018   65980 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:11:46.851117   65980 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:11:46.851201   65980 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:11:46.851263   65980 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:11:46.851327   65980 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:11:46.851395   65980 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:11:46.851466   65980 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:11:46.851513   65980 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:11:46.851605   65980 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:11:46.851690   65980 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:11:46.851791   65980 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:11:46.851878   65980 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:11:46.853420   65980 out.go:204]   - Booting up control plane ...
	I0429 20:11:46.853526   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:11:46.853617   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:11:46.853696   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:11:46.853791   65980 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:11:46.853866   65980 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:11:46.853900   65980 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:11:46.854010   65980 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:11:46.854094   65980 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:11:46.854148   65980 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.976221ms
	I0429 20:11:46.854240   65980 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:11:46.854311   65980 kubeadm.go:309] [api-check] The API server is healthy after 5.50298765s
	I0429 20:11:46.854407   65980 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:11:46.854509   65980 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:11:46.854565   65980 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:11:46.854726   65980 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-161370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:11:46.854783   65980 kubeadm.go:309] [bootstrap-token] Using token: 93xwhj.zowa67wvl54p1iru
	I0429 20:11:46.856308   65980 out.go:204]   - Configuring RBAC rules ...
	I0429 20:11:46.856452   65980 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:11:46.856561   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:11:46.856736   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:11:46.856867   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:11:46.857018   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:11:46.857140   65980 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:11:46.857294   65980 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:11:46.857358   65980 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:11:46.857419   65980 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:11:46.857428   65980 kubeadm.go:309] 
	I0429 20:11:46.857502   65980 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:11:46.857514   65980 kubeadm.go:309] 
	I0429 20:11:46.857606   65980 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:11:46.857617   65980 kubeadm.go:309] 
	I0429 20:11:46.857649   65980 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:11:46.857725   65980 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:11:46.857797   65980 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:11:46.857806   65980 kubeadm.go:309] 
	I0429 20:11:46.857880   65980 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:11:46.857889   65980 kubeadm.go:309] 
	I0429 20:11:46.857947   65980 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:11:46.857955   65980 kubeadm.go:309] 
	I0429 20:11:46.858020   65980 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:11:46.858125   65980 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:11:46.858216   65980 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:11:46.858224   65980 kubeadm.go:309] 
	I0429 20:11:46.858325   65980 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:11:46.858433   65980 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:11:46.858442   65980 kubeadm.go:309] 
	I0429 20:11:46.858553   65980 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 93xwhj.zowa67wvl54p1iru \
	I0429 20:11:46.858696   65980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 20:11:46.858722   65980 kubeadm.go:309] 	--control-plane 
	I0429 20:11:46.858728   65980 kubeadm.go:309] 
	I0429 20:11:46.858797   65980 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:11:46.858803   65980 kubeadm.go:309] 
	I0429 20:11:46.858881   65980 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 93xwhj.zowa67wvl54p1iru \
	I0429 20:11:46.859014   65980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
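
The --discovery-token-ca-cert-hash value printed in the join commands above is, in kubeadm's scheme, the SHA-256 of the cluster CA certificate's DER-encoded public key (its SubjectPublicKeyInfo), which a joining node uses to pin the control plane's identity. A minimal Go sketch of that computation follows; the certificate path /var/lib/minikube/certs/ca.crt is an assumption based on the certificateDir shown earlier in this log.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Path assumed from the "[certs] Using certificateDir folder" line above.
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo, the same pin format kubeadm prints.
        der, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(der))
    }
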
	I0429 20:11:46.859025   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:11:46.859034   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:11:46.861619   65980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:11:46.863111   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:11:46.875965   65980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:11:46.897147   65980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:11:46.897225   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:46.897238   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-161370 minikube.k8s.io/updated_at=2024_04_29T20_11_46_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=embed-certs-161370 minikube.k8s.io/primary=true
	I0429 20:11:46.927555   65980 ops.go:34] apiserver oom_adj: -16
	I0429 20:11:47.119594   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:47.620640   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:48.119974   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:48.620618   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:49.120107   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:49.620349   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:50.120180   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:50.620533   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:51.120332   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:51.620669   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:52.119922   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:52.620467   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:53.120486   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:53.620314   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:54.120159   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:54.620430   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:55.119995   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:55.620496   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:56.120152   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:56.620390   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:57.120090   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:57.619671   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:58.120549   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:58.620334   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.120532   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.619732   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.765502   65980 kubeadm.go:1107] duration metric: took 12.868344365s to wait for elevateKubeSystemPrivileges
	W0429 20:11:59.765550   65980 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:11:59.765561   65980 kubeadm.go:393] duration metric: took 5m12.339650014s to StartCluster
	I0429 20:11:59.765582   65980 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:59.765671   65980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:11:59.767924   65980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:59.768253   65980 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:11:59.769950   65980 out.go:177] * Verifying Kubernetes components...
	I0429 20:11:59.768323   65980 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:11:59.768433   65980 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:11:59.771281   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:11:59.771300   65980 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-161370"
	I0429 20:11:59.771313   65980 addons.go:69] Setting default-storageclass=true in profile "embed-certs-161370"
	I0429 20:11:59.771332   65980 addons.go:69] Setting metrics-server=true in profile "embed-certs-161370"
	I0429 20:11:59.771344   65980 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-161370"
	W0429 20:11:59.771355   65980 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:11:59.771361   65980 addons.go:234] Setting addon metrics-server=true in "embed-certs-161370"
	W0429 20:11:59.771370   65980 addons.go:243] addon metrics-server should already be in state true
	I0429 20:11:59.771399   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.771401   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.771354   65980 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-161370"
	I0429 20:11:59.771757   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771768   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771772   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771783   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.771786   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.771788   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.787359   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0429 20:11:59.787384   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0429 20:11:59.787503   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46153
	I0429 20:11:59.787764   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.787987   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.788069   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.788254   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788273   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.788708   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788724   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.788773   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.788832   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788852   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.789102   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.789117   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.789267   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.789478   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.789510   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.790170   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.790220   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.792108   65980 addons.go:234] Setting addon default-storageclass=true in "embed-certs-161370"
	W0429 20:11:59.792127   65980 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:11:59.792154   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.792386   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.792424   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.808581   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35659
	I0429 20:11:59.808924   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44943
	I0429 20:11:59.808943   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.809461   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.809481   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.809561   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.809791   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.810335   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.810357   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.810976   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.810992   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.811324   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.811604   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0429 20:11:59.811758   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.812141   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.812592   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.812610   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.813130   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.813351   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.813614   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.815589   65980 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:11:59.817004   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:11:59.817014   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:11:59.817027   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.815020   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.818585   65980 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:11:59.820110   65980 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:59.820125   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:11:59.820140   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.819840   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.820305   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.820333   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.820563   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.820722   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.820874   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.820998   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.822849   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.823299   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.823323   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.823460   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.823599   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.823924   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.824039   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.827552   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0429 20:11:59.827976   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.828369   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.828389   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.828754   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.828921   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.830295   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.830566   65980 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:59.830578   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:11:59.830590   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.833174   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.833526   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.833545   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.833759   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.833910   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.834029   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.834166   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.978978   65980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:11:59.995547   65980 node_ready.go:35] waiting up to 6m0s for node "embed-certs-161370" to be "Ready" ...
	I0429 20:12:00.003802   65980 node_ready.go:49] node "embed-certs-161370" has status "Ready":"True"
	I0429 20:12:00.003823   65980 node_ready.go:38] duration metric: took 8.245639ms for node "embed-certs-161370" to be "Ready" ...
	I0429 20:12:00.003833   65980 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:12:00.010487   65980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:00.072627   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:12:00.075716   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:12:00.177043   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:12:00.177069   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:12:00.278082   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:12:00.278112   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:12:00.311731   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:12:00.311756   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:12:00.369982   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:12:00.642840   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.642865   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643084   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.643109   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643227   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.643240   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.643248   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.643256   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643374   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.645085   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645103   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.645112   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.645121   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.645196   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645228   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.645231   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.645331   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645343   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.658929   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.658955   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.659236   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.659267   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.659281   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.103183   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:01.103207   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:01.103488   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:01.103542   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.103557   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:01.103541   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:01.103584   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:01.105440   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:01.105461   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.105473   65980 addons.go:470] Verifying addon metrics-server=true in "embed-certs-161370"
	I0429 20:12:01.107435   65980 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0429 20:12:01.109051   65980 addons.go:505] duration metric: took 1.340729876s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0429 20:12:02.029772   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace has status "Ready":"False"
	I0429 20:12:02.520396   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.520417   65980 pod_ready.go:81] duration metric: took 2.509903724s for pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.520426   65980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.529115   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.529141   65980 pod_ready.go:81] duration metric: took 8.707165ms for pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.529153   65980 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.539459   65980 pod_ready.go:92] pod "etcd-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.539478   65980 pod_ready.go:81] duration metric: took 10.318294ms for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.539489   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.543813   65980 pod_ready.go:92] pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.543830   65980 pod_ready.go:81] duration metric: took 4.333619ms for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.543839   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.549343   65980 pod_ready.go:92] pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.549363   65980 pod_ready.go:81] duration metric: took 5.516323ms for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.549374   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wq48j" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.915209   65980 pod_ready.go:92] pod "kube-proxy-wq48j" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.915232   65980 pod_ready.go:81] duration metric: took 365.851814ms for pod "kube-proxy-wq48j" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.915240   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:03.315564   65980 pod_ready.go:92] pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:03.315587   65980 pod_ready.go:81] duration metric: took 400.340876ms for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:03.315595   65980 pod_ready.go:38] duration metric: took 3.311752591s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:12:03.315609   65980 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:12:03.315655   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:12:03.333491   65980 api_server.go:72] duration metric: took 3.565200855s to wait for apiserver process to appear ...
	I0429 20:12:03.333521   65980 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:12:03.333538   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:12:03.338822   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0429 20:12:03.339975   65980 api_server.go:141] control plane version: v1.30.0
	I0429 20:12:03.339995   65980 api_server.go:131] duration metric: took 6.468233ms to wait for apiserver health ...
	I0429 20:12:03.340002   65980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:12:03.519016   65980 system_pods.go:59] 9 kube-system pods found
	I0429 20:12:03.519042   65980 system_pods.go:61] "coredns-7db6d8ff4d-7z6zv" [422451a2-615d-4bf8-8de8-d5fa5805219f] Running
	I0429 20:12:03.519047   65980 system_pods.go:61] "coredns-7db6d8ff4d-rr6bd" [6d14ff20-6dab-4c02-b91c-0a1e326f1593] Running
	I0429 20:12:03.519050   65980 system_pods.go:61] "etcd-embed-certs-161370" [ab19e79c-18bd-4d0d-b5cf-639453495383] Running
	I0429 20:12:03.519055   65980 system_pods.go:61] "kube-apiserver-embed-certs-161370" [6091dd0a-333d-4729-97db-eb7a30755db4] Running
	I0429 20:12:03.519059   65980 system_pods.go:61] "kube-controller-manager-embed-certs-161370" [de70d57c-9329-4d37-a838-9c9ae1e41871] Running
	I0429 20:12:03.519061   65980 system_pods.go:61] "kube-proxy-wq48j" [3b3b23ef-b5b4-4754-bc44-73e1d51a18d7] Running
	I0429 20:12:03.519065   65980 system_pods.go:61] "kube-scheduler-embed-certs-161370" [c7fd3d36-4e35-43b2-93e7-45129464937d] Running
	I0429 20:12:03.519071   65980 system_pods.go:61] "metrics-server-569cc877fc-x2wb6" [cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:12:03.519075   65980 system_pods.go:61] "storage-provisioner" [93e046a1-3867-44e1-8a4f-cf0eba6dfd6b] Running
	I0429 20:12:03.519082   65980 system_pods.go:74] duration metric: took 179.075384ms to wait for pod list to return data ...
	I0429 20:12:03.519089   65980 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:12:03.714354   65980 default_sa.go:45] found service account: "default"
	I0429 20:12:03.714384   65980 default_sa.go:55] duration metric: took 195.287433ms for default service account to be created ...
	I0429 20:12:03.714395   65980 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:12:03.918729   65980 system_pods.go:86] 9 kube-system pods found
	I0429 20:12:03.918755   65980 system_pods.go:89] "coredns-7db6d8ff4d-7z6zv" [422451a2-615d-4bf8-8de8-d5fa5805219f] Running
	I0429 20:12:03.918760   65980 system_pods.go:89] "coredns-7db6d8ff4d-rr6bd" [6d14ff20-6dab-4c02-b91c-0a1e326f1593] Running
	I0429 20:12:03.918765   65980 system_pods.go:89] "etcd-embed-certs-161370" [ab19e79c-18bd-4d0d-b5cf-639453495383] Running
	I0429 20:12:03.918769   65980 system_pods.go:89] "kube-apiserver-embed-certs-161370" [6091dd0a-333d-4729-97db-eb7a30755db4] Running
	I0429 20:12:03.918773   65980 system_pods.go:89] "kube-controller-manager-embed-certs-161370" [de70d57c-9329-4d37-a838-9c9ae1e41871] Running
	I0429 20:12:03.918777   65980 system_pods.go:89] "kube-proxy-wq48j" [3b3b23ef-b5b4-4754-bc44-73e1d51a18d7] Running
	I0429 20:12:03.918780   65980 system_pods.go:89] "kube-scheduler-embed-certs-161370" [c7fd3d36-4e35-43b2-93e7-45129464937d] Running
	I0429 20:12:03.918787   65980 system_pods.go:89] "metrics-server-569cc877fc-x2wb6" [cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:12:03.918791   65980 system_pods.go:89] "storage-provisioner" [93e046a1-3867-44e1-8a4f-cf0eba6dfd6b] Running
	I0429 20:12:03.918800   65980 system_pods.go:126] duration metric: took 204.399385ms to wait for k8s-apps to be running ...
	I0429 20:12:03.918809   65980 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:12:03.918851   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:12:03.937870   65980 system_svc.go:56] duration metric: took 19.05503ms WaitForService to wait for kubelet
	I0429 20:12:03.937892   65980 kubeadm.go:576] duration metric: took 4.169607456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:12:03.937910   65980 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:12:04.116479   65980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:12:04.116504   65980 node_conditions.go:123] node cpu capacity is 2
	I0429 20:12:04.116513   65980 node_conditions.go:105] duration metric: took 178.599246ms to run NodePressure ...
	I0429 20:12:04.116524   65980 start.go:240] waiting for startup goroutines ...
	I0429 20:12:04.116530   65980 start.go:245] waiting for cluster config update ...
	I0429 20:12:04.116540   65980 start.go:254] writing updated cluster config ...
	I0429 20:12:04.116799   65980 ssh_runner.go:195] Run: rm -f paused
	I0429 20:12:04.167803   65980 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:12:04.169861   65980 out.go:177] * Done! kubectl is now configured to use "embed-certs-161370" cluster and "default" namespace by default
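
The per-pod "Ready" waits recorded above (pod_ready.go checking coredns, etcd, kube-apiserver, and the rest) amount to reading each pod's PodReady condition from the API. A rough client-go sketch of that check is shown below; it assumes a kubeconfig at the default ~/.kube/config path and a Go module with k8s.io/client-go available, and it is not the minikube implementation itself.

    package main

    import (
        "context"
        "fmt"
        "path/filepath"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        // Assumed kubeconfig location; minikube writes its own under the test's MINIKUBE_HOME.
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                // A pod counts as "Ready" when its PodReady condition is True.
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%s Ready=%v\n", p.Name, ready)
        }
    }
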
	I0429 20:12:09.853929   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:12:09.854036   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:12:09.856141   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:12:09.856215   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:12:09.856314   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:12:09.856435   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:12:09.856529   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:12:09.856638   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:12:09.858658   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:12:09.858759   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:12:09.858821   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:12:09.858914   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:12:09.858967   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:12:09.859049   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:12:09.859118   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:12:09.859197   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:12:09.859311   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:12:09.859435   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:12:09.859548   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:12:09.859605   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:12:09.859678   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:12:09.859766   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:12:09.859856   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:12:09.859947   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:12:09.860025   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:12:09.860149   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:12:09.860228   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:12:09.860289   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:12:09.860390   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:12:09.862098   66615 out.go:204]   - Booting up control plane ...
	I0429 20:12:09.862211   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:12:09.862298   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:12:09.862360   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:12:09.862484   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:12:09.862720   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:12:09.862794   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:12:09.862882   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863117   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863244   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863470   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863544   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863814   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863895   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864144   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864223   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864393   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864408   66615 kubeadm.go:309] 
	I0429 20:12:09.864473   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:12:09.864526   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:12:09.864543   66615 kubeadm.go:309] 
	I0429 20:12:09.864589   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:12:09.864638   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:12:09.864779   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:12:09.864789   66615 kubeadm.go:309] 
	I0429 20:12:09.864911   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:12:09.864971   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:12:09.865026   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:12:09.865033   66615 kubeadm.go:309] 
	I0429 20:12:09.865150   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:12:09.865228   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:12:09.865241   66615 kubeadm.go:309] 
	I0429 20:12:09.865404   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:12:09.865538   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:12:09.865651   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:12:09.865755   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:12:09.865828   66615 kubeadm.go:309] 
	W0429 20:12:09.865940   66615 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
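	The failure above is the wait-control-plane timeout; kubeadm's own output already names the triage steps. A sketch of how that evidence could be collected on the affected node (the profile name is a placeholder, since it is not shown in this excerpt):

	# open a shell on the failing node; <profile> is a placeholder
	minikube ssh -p <profile>
	# then, inside the VM, follow the hints from the kubeadm output above
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause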
	I0429 20:12:09.866027   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:12:10.987703   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.121642991s)
	I0429 20:12:10.987802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:12:11.007295   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:12:11.020772   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:12:11.020790   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:12:11.020838   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:12:11.033334   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:12:11.033405   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:12:11.044565   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:12:11.057087   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:12:11.057143   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:12:11.069908   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.082866   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:12:11.082920   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.096659   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:12:11.110106   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:12:11.110166   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:12:11.124952   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:12:11.396252   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:14:07.831448   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:14:07.831556   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:14:07.833111   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:14:07.833179   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:14:07.833288   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:14:07.833421   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:14:07.833530   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:14:07.833616   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:14:07.835518   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:14:07.835623   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:14:07.835703   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:14:07.835776   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:14:07.835839   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:14:07.835893   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:14:07.835957   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:14:07.836039   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:14:07.836129   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:14:07.836238   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:14:07.836350   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:14:07.836394   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:14:07.836441   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:14:07.836488   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:14:07.836559   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:14:07.836637   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:14:07.836683   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:14:07.836778   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:14:07.836854   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:14:07.836895   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:14:07.836950   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:14:07.838553   66615 out.go:204]   - Booting up control plane ...
	I0429 20:14:07.838635   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:14:07.838718   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:14:07.838836   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:14:07.838918   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:14:07.839069   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:14:07.839126   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:14:07.839180   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839369   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839450   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839654   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839779   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840008   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840076   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840322   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840380   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840571   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840594   66615 kubeadm.go:309] 
	I0429 20:14:07.840637   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:14:07.840673   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:14:07.840682   66615 kubeadm.go:309] 
	I0429 20:14:07.840715   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:14:07.840745   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:14:07.840844   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:14:07.840857   66615 kubeadm.go:309] 
	I0429 20:14:07.840969   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:14:07.841022   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:14:07.841073   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:14:07.841083   66615 kubeadm.go:309] 
	I0429 20:14:07.841184   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:14:07.841315   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:14:07.841325   66615 kubeadm.go:309] 
	I0429 20:14:07.841454   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:14:07.841550   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:14:07.841632   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:14:07.841697   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:14:07.841760   66615 kubeadm.go:393] duration metric: took 8m1.501853767s to StartCluster
	I0429 20:14:07.841781   66615 kubeadm.go:309] 
	I0429 20:14:07.841800   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:14:07.841853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:14:07.898194   66615 cri.go:89] found id: ""
	I0429 20:14:07.898227   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.898237   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:14:07.898244   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:14:07.898316   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:14:07.938873   66615 cri.go:89] found id: ""
	I0429 20:14:07.938903   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.938914   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:14:07.938921   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:14:07.938979   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:14:07.980523   66615 cri.go:89] found id: ""
	I0429 20:14:07.980551   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.980559   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:14:07.980565   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:14:07.980612   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:14:08.021334   66615 cri.go:89] found id: ""
	I0429 20:14:08.021366   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.021377   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:14:08.021389   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:14:08.021446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:14:08.060598   66615 cri.go:89] found id: ""
	I0429 20:14:08.060636   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.060648   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:14:08.060655   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:14:08.060716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:14:08.101689   66615 cri.go:89] found id: ""
	I0429 20:14:08.101715   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.101723   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:14:08.101729   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:14:08.101786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:14:08.143295   66615 cri.go:89] found id: ""
	I0429 20:14:08.143333   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.143344   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:14:08.143351   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:14:08.143408   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:14:08.190555   66615 cri.go:89] found id: ""
	I0429 20:14:08.190585   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.190597   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:14:08.190609   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:14:08.190624   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:14:08.251830   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:14:08.251870   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:14:08.306512   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:14:08.306554   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:14:08.323258   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:14:08.323283   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:14:08.405539   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
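	The "describe nodes" step fails because nothing is answering on localhost:8443. A quick way to confirm whether the apiserver is listening at all from inside the node (generic diagnostics, not commands taken from the recorded run):

	sudo ss -tlnp | grep 8443
	curl -k https://localhost:8443/healthz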
	I0429 20:14:08.405568   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:14:08.405583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0429 20:14:08.514288   66615 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0429 20:14:08.514344   66615 out.go:239] * 
	W0429 20:14:08.514431   66615 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.514465   66615 out.go:239] * 
	W0429 20:14:08.515399   66615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:14:08.518578   66615 out.go:177] 
	W0429 20:14:08.519725   66615 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.519782   66615 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0429 20:14:08.519816   66615 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0429 20:14:08.521068   66615 out.go:177] 
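	Following the suggestion printed above, a retry with the proposed kubelet cgroup-driver override, plus the log collection the report box asks for, might look like this (the profile name is a placeholder; the flags are exactly the ones quoted in the output above):

	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	minikube logs -p <profile> --file=logs.txt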
	
	
	==> CRI-O <==
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.368230326Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714421999368204036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=082fcf99-02a8-4907-aea1-7ba7ddad5935 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.369038715Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81b725d6-d6d0-4901-b014-713ac6ce7676 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.369158149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81b725d6-d6d0-4901-b014-713ac6ce7676 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.369430596Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421222990630090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f1a04bc18d4b5afb60abd8f5cc2c1502fe9b02888477d81d21621cceed451c,PodSandboxId:6a08429e8c4823cbc29bf41bf26f56ab428639313edcad5037de9566d3a6983f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714421202883574425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60422741-64fe-4169-bdbd-384825776aef,},Annotations:map[string]string{io.kubernetes.container.hash: 8545d2cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52,PodSandboxId:e4a8e598d93b3af609a80df6b75698559b2b6e086a04706aec5ad4fbbf311ba8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421199795247389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7m65s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72397559-b0da-492a-be1c-297027021f50,},Annotations:map[string]string{io.kubernetes.container.hash: 51500de8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714421192155164718,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561,PodSandboxId:b8bf49dccc6d886bc7628b38f50835c95ec5329e881e094eac6e5b0fce75b52f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714421192089263486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zddtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47956c-26c1-48e2-8f42-a2a
81d201503,},Annotations:map[string]string{io.kubernetes.container.hash: b9b15c9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0,PodSandboxId:01b4b04f083a312f923e21ae7f5b4c1318fab64fd7f62482c873f8078d56022b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421187521686479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077414c522aee9483d3819d99
7b879c8,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f,PodSandboxId:34abaa6dac5ebedec40d5b604770433edb44465efaee911ec475837813e22cc7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421187485479944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7177a093ebd5743fc5b68cae5a3d2c0,},Annotations:map[string
]string{io.kubernetes.container.hash: cf1ccb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552,PodSandboxId:62b9000d26f2d365735496701ac01757eb9ee92273cb805b8499089443a85493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421187431442344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960e82e54b5cb1fc11c964ee67d686c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a67f4c5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9,PodSandboxId:834f9cbce565cf0a59364cd782b0e4edbe4834a232df6df0aaafdc4bd7130864,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421187359252463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74981adc5b9d59cd235f804f7b09fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81b725d6-d6d0-4901-b014-713ac6ce7676 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.415474197Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f147f2f4-d686-41a0-bc4e-e83988c0d51f name=/runtime.v1.RuntimeService/Version
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.415615933Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f147f2f4-d686-41a0-bc4e-e83988c0d51f name=/runtime.v1.RuntimeService/Version
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.417721634Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08f99c36-20a4-4bd3-918f-87e6c7d6bf0d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.418595960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714421999418570972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08f99c36-20a4-4bd3-918f-87e6c7d6bf0d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.420045919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e40ec2a2-a374-4e02-aa32-939e81ca5a90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.420133708Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e40ec2a2-a374-4e02-aa32-939e81ca5a90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.420319901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421222990630090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f1a04bc18d4b5afb60abd8f5cc2c1502fe9b02888477d81d21621cceed451c,PodSandboxId:6a08429e8c4823cbc29bf41bf26f56ab428639313edcad5037de9566d3a6983f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714421202883574425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60422741-64fe-4169-bdbd-384825776aef,},Annotations:map[string]string{io.kubernetes.container.hash: 8545d2cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52,PodSandboxId:e4a8e598d93b3af609a80df6b75698559b2b6e086a04706aec5ad4fbbf311ba8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421199795247389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7m65s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72397559-b0da-492a-be1c-297027021f50,},Annotations:map[string]string{io.kubernetes.container.hash: 51500de8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714421192155164718,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561,PodSandboxId:b8bf49dccc6d886bc7628b38f50835c95ec5329e881e094eac6e5b0fce75b52f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714421192089263486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zddtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47956c-26c1-48e2-8f42-a2a
81d201503,},Annotations:map[string]string{io.kubernetes.container.hash: b9b15c9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0,PodSandboxId:01b4b04f083a312f923e21ae7f5b4c1318fab64fd7f62482c873f8078d56022b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421187521686479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077414c522aee9483d3819d99
7b879c8,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f,PodSandboxId:34abaa6dac5ebedec40d5b604770433edb44465efaee911ec475837813e22cc7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421187485479944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7177a093ebd5743fc5b68cae5a3d2c0,},Annotations:map[string
]string{io.kubernetes.container.hash: cf1ccb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552,PodSandboxId:62b9000d26f2d365735496701ac01757eb9ee92273cb805b8499089443a85493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421187431442344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960e82e54b5cb1fc11c964ee67d686c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a67f4c5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9,PodSandboxId:834f9cbce565cf0a59364cd782b0e4edbe4834a232df6df0aaafdc4bd7130864,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421187359252463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74981adc5b9d59cd235f804f7b09fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e40ec2a2-a374-4e02-aa32-939e81ca5a90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.466977071Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ff75324-ebf7-4b93-a9ba-5a479e8f30ea name=/runtime.v1.RuntimeService/Version
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.467073959Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ff75324-ebf7-4b93-a9ba-5a479e8f30ea name=/runtime.v1.RuntimeService/Version
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.468555729Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e3e16c3-e4fc-4fff-b9ff-7dc3a99fe411 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.469854937Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714421999469822747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e3e16c3-e4fc-4fff-b9ff-7dc3a99fe411 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.473505693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b317925-bbfa-4a44-a07d-4dcafbe7d14b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.473588528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b317925-bbfa-4a44-a07d-4dcafbe7d14b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.473795255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421222990630090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f1a04bc18d4b5afb60abd8f5cc2c1502fe9b02888477d81d21621cceed451c,PodSandboxId:6a08429e8c4823cbc29bf41bf26f56ab428639313edcad5037de9566d3a6983f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714421202883574425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60422741-64fe-4169-bdbd-384825776aef,},Annotations:map[string]string{io.kubernetes.container.hash: 8545d2cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52,PodSandboxId:e4a8e598d93b3af609a80df6b75698559b2b6e086a04706aec5ad4fbbf311ba8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421199795247389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7m65s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72397559-b0da-492a-be1c-297027021f50,},Annotations:map[string]string{io.kubernetes.container.hash: 51500de8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714421192155164718,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561,PodSandboxId:b8bf49dccc6d886bc7628b38f50835c95ec5329e881e094eac6e5b0fce75b52f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714421192089263486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zddtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47956c-26c1-48e2-8f42-a2a
81d201503,},Annotations:map[string]string{io.kubernetes.container.hash: b9b15c9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0,PodSandboxId:01b4b04f083a312f923e21ae7f5b4c1318fab64fd7f62482c873f8078d56022b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421187521686479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077414c522aee9483d3819d99
7b879c8,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f,PodSandboxId:34abaa6dac5ebedec40d5b604770433edb44465efaee911ec475837813e22cc7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421187485479944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7177a093ebd5743fc5b68cae5a3d2c0,},Annotations:map[string
]string{io.kubernetes.container.hash: cf1ccb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552,PodSandboxId:62b9000d26f2d365735496701ac01757eb9ee92273cb805b8499089443a85493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421187431442344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960e82e54b5cb1fc11c964ee67d686c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a67f4c5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9,PodSandboxId:834f9cbce565cf0a59364cd782b0e4edbe4834a232df6df0aaafdc4bd7130864,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421187359252463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74981adc5b9d59cd235f804f7b09fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b317925-bbfa-4a44-a07d-4dcafbe7d14b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.513560993Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=46607ad8-ec68-4fa8-b79f-e3b2d0cdc0cd name=/runtime.v1.RuntimeService/Version
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.513681277Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=46607ad8-ec68-4fa8-b79f-e3b2d0cdc0cd name=/runtime.v1.RuntimeService/Version
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.515458949Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29b4cedb-fc86-4ddc-9ff1-226d07663e40 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.516634158Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714421999516609548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29b4cedb-fc86-4ddc-9ff1-226d07663e40 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.517844130Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a851f83e-c679-48e2-ab80-064d869a204a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.518148275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a851f83e-c679-48e2-ab80-064d869a204a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:19:59 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:19:59.518387352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421222990630090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f1a04bc18d4b5afb60abd8f5cc2c1502fe9b02888477d81d21621cceed451c,PodSandboxId:6a08429e8c4823cbc29bf41bf26f56ab428639313edcad5037de9566d3a6983f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714421202883574425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60422741-64fe-4169-bdbd-384825776aef,},Annotations:map[string]string{io.kubernetes.container.hash: 8545d2cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52,PodSandboxId:e4a8e598d93b3af609a80df6b75698559b2b6e086a04706aec5ad4fbbf311ba8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421199795247389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7m65s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72397559-b0da-492a-be1c-297027021f50,},Annotations:map[string]string{io.kubernetes.container.hash: 51500de8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714421192155164718,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561,PodSandboxId:b8bf49dccc6d886bc7628b38f50835c95ec5329e881e094eac6e5b0fce75b52f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714421192089263486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zddtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47956c-26c1-48e2-8f42-a2a
81d201503,},Annotations:map[string]string{io.kubernetes.container.hash: b9b15c9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0,PodSandboxId:01b4b04f083a312f923e21ae7f5b4c1318fab64fd7f62482c873f8078d56022b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421187521686479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077414c522aee9483d3819d99
7b879c8,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f,PodSandboxId:34abaa6dac5ebedec40d5b604770433edb44465efaee911ec475837813e22cc7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421187485479944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7177a093ebd5743fc5b68cae5a3d2c0,},Annotations:map[string
]string{io.kubernetes.container.hash: cf1ccb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552,PodSandboxId:62b9000d26f2d365735496701ac01757eb9ee92273cb805b8499089443a85493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421187431442344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960e82e54b5cb1fc11c964ee67d686c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a67f4c5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9,PodSandboxId:834f9cbce565cf0a59364cd782b0e4edbe4834a232df6df0aaafdc4bd7130864,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421187359252463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74981adc5b9d59cd235f804f7b09fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a851f83e-c679-48e2-ab80-064d869a204a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	55a4d86ba249f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   c91cb288bef7c       storage-provisioner
	b9f1a04bc18d4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   6a08429e8c482       busybox
	ff819232db9ec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   e4a8e598d93b3       coredns-7db6d8ff4d-7m65s
	d235258efef8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   c91cb288bef7c       storage-provisioner
	5291e43ebc5a3       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      13 minutes ago      Running             kube-proxy                1                   b8bf49dccc6d8       kube-proxy-zddtx
	38c3d9d672593       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      13 minutes ago      Running             kube-scheduler            1                   01b4b04f083a3       kube-scheduler-default-k8s-diff-port-866143
	7813548bb1ebb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   34abaa6dac5eb       etcd-default-k8s-diff-port-866143
	40e61b985a70c       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      13 minutes ago      Running             kube-apiserver            1                   62b9000d26f2d       kube-apiserver-default-k8s-diff-port-866143
	453c723fef9ad       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      13 minutes ago      Running             kube-controller-manager   1                   834f9cbce565c       kube-controller-manager-default-k8s-diff-port-866143
	
	
	==> coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49827 - 59453 "HINFO IN 708020101607324385.5107843508713828177. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014125611s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-866143
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-866143
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=default-k8s-diff-port-866143
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T19_59_40_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:59:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-866143
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:19:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:17:15 +0000   Mon, 29 Apr 2024 19:59:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:17:15 +0000   Mon, 29 Apr 2024 19:59:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:17:15 +0000   Mon, 29 Apr 2024 19:59:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:17:15 +0000   Mon, 29 Apr 2024 20:06:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.106
	  Hostname:    default-k8s-diff-port-866143
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a12ab39f5cd241eeaeb7bd76cd5f62dd
	  System UUID:                a12ab39f-5cd2-41ee-aeb7-bd76cd5f62dd
	  Boot ID:                    e2aa995e-fe3a-4c45-a4f2-3707115a5739
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7db6d8ff4d-7m65s                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-866143                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-866143              250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-866143     200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-zddtx                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-866143              100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-569cc877fc-g6gw2                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasSufficientPID
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-866143 status is now: NodeReady
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-866143 event: Registered Node default-k8s-diff-port-866143 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-866143 event: Registered Node default-k8s-diff-port-866143 in Controller
	
	
	==> dmesg <==
	[Apr29 20:06] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063609] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049309] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.117522] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.566583] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.599501] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.366829] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.061114] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067741] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.195585] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.141738] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.349230] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +5.293185] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.066389] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.742309] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +5.633764] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.475734] systemd-fstab-generator[1546]: Ignoring "noauto" option for root device
	[  +3.264132] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.189664] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] <==
	{"level":"info","ts":"2024-04-29T20:06:29.913328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5cae5a320d04b4e8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-29T20:06:29.913447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5cae5a320d04b4e8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-29T20:06:29.913505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5cae5a320d04b4e8 received MsgPreVoteResp from 5cae5a320d04b4e8 at term 2"}
	{"level":"info","ts":"2024-04-29T20:06:29.913539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5cae5a320d04b4e8 became candidate at term 3"}
	{"level":"info","ts":"2024-04-29T20:06:29.913564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5cae5a320d04b4e8 received MsgVoteResp from 5cae5a320d04b4e8 at term 3"}
	{"level":"info","ts":"2024-04-29T20:06:29.913594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5cae5a320d04b4e8 became leader at term 3"}
	{"level":"info","ts":"2024-04-29T20:06:29.913629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5cae5a320d04b4e8 elected leader 5cae5a320d04b4e8 at term 3"}
	{"level":"info","ts":"2024-04-29T20:06:29.957179Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5cae5a320d04b4e8","local-member-attributes":"{Name:default-k8s-diff-port-866143 ClientURLs:[https://192.168.61.106:2379]}","request-path":"/0/members/5cae5a320d04b4e8/attributes","cluster-id":"3342c189df71152b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T20:06:29.957199Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:06:29.957218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:06:29.958022Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T20:06:29.958138Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T20:06:29.959991Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T20:06:29.961197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.106:2379"}
	{"level":"warn","ts":"2024-04-29T20:06:48.526158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.773764ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13035826538271322009 > lease_revoke:<id:34e88f2b7745ae26>","response":"size:27"}
	{"level":"warn","ts":"2024-04-29T20:06:48.526746Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"480.912665ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-g6gw2\" ","response":"range_response_count:1 size:4293"}
	{"level":"info","ts":"2024-04-29T20:06:48.526835Z","caller":"traceutil/trace.go:171","msg":"trace[2027377491] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-g6gw2; range_end:; response_count:1; response_revision:599; }","duration":"481.02994ms","start":"2024-04-29T20:06:48.045795Z","end":"2024-04-29T20:06:48.526825Z","steps":["trace[2027377491] 'agreement among raft nodes before linearized reading'  (duration: 480.839111ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:06:48.527032Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:06:48.045782Z","time spent":"481.2314ms","remote":"127.0.0.1:57282","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4315,"request content":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-g6gw2\" "}
	{"level":"info","ts":"2024-04-29T20:06:48.526539Z","caller":"traceutil/trace.go:171","msg":"trace[219925617] linearizableReadLoop","detail":"{readStateIndex:635; appliedIndex:634; }","duration":"480.591493ms","start":"2024-04-29T20:06:48.045817Z","end":"2024-04-29T20:06:48.526408Z","steps":["trace[219925617] 'read index received'  (duration: 254.423469ms)","trace[219925617] 'applied index is now lower than readState.Index'  (duration: 226.166769ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T20:06:48.529373Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"433.553334ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:06:48.529465Z","caller":"traceutil/trace.go:171","msg":"trace[2037410542] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:599; }","duration":"433.667157ms","start":"2024-04-29T20:06:48.095787Z","end":"2024-04-29T20:06:48.529454Z","steps":["trace[2037410542] 'agreement among raft nodes before linearized reading'  (duration: 433.551354ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:06:48.529519Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:06:48.095775Z","time spent":"433.736362ms","remote":"127.0.0.1:57070","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-04-29T20:16:30.004559Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":843}
	{"level":"info","ts":"2024-04-29T20:16:30.019157Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":843,"took":"14.180178ms","hash":4288102240,"current-db-size-bytes":2691072,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2691072,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-04-29T20:16:30.019255Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4288102240,"revision":843,"compact-revision":-1}
	
	
	==> kernel <==
	 20:19:59 up 13 min,  0 users,  load average: 0.04, 0.12, 0.09
	Linux default-k8s-diff-port-866143 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] <==
	I0429 20:14:32.451980       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:16:31.452743       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:16:31.453004       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0429 20:16:32.453617       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:16:32.453772       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:16:32.453802       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:16:32.453969       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:16:32.454056       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:16:32.455289       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:17:32.453988       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:17:32.454146       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:17:32.454180       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:17:32.456312       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:17:32.456399       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:17:32.456407       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:19:32.455048       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:19:32.455126       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:19:32.455135       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:19:32.457514       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:19:32.457606       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:19:32.457615       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] <==
	I0429 20:14:15.454770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:14:44.810012       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:14:45.463215       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:15:14.815215       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:15:15.471462       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:15:44.820805       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:15:45.479573       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:16:14.827209       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:16:15.488593       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:16:44.833750       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:16:45.497037       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:17:14.839474       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:17:15.506538       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:17:44.844062       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:17:45.526585       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0429 20:17:51.753651       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="208.87µs"
	I0429 20:18:02.759558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="51.919µs"
	E0429 20:18:14.851275       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:18:15.540261       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:18:44.857706       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:18:45.547815       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:19:14.864951       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:19:15.556553       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:19:44.871834       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:19:45.565218       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] <==
	I0429 20:06:32.289757       1 server_linux.go:69] "Using iptables proxy"
	I0429 20:06:32.299308       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.106"]
	I0429 20:06:32.347355       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 20:06:32.347455       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 20:06:32.347485       1 server_linux.go:165] "Using iptables Proxier"
	I0429 20:06:32.351154       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 20:06:32.351434       1 server.go:872] "Version info" version="v1.30.0"
	I0429 20:06:32.351479       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:06:32.352641       1 config.go:192] "Starting service config controller"
	I0429 20:06:32.352689       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 20:06:32.352725       1 config.go:101] "Starting endpoint slice config controller"
	I0429 20:06:32.352741       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 20:06:32.354511       1 config.go:319] "Starting node config controller"
	I0429 20:06:32.354554       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 20:06:32.453604       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 20:06:32.453691       1 shared_informer.go:320] Caches are synced for service config
	I0429 20:06:32.455597       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] <==
	I0429 20:06:29.096365       1 serving.go:380] Generated self-signed cert in-memory
	I0429 20:06:31.565129       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 20:06:31.571023       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:06:31.588739       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 20:06:31.588853       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0429 20:06:31.588939       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0429 20:06:31.588962       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 20:06:31.599329       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 20:06:31.599387       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 20:06:31.599406       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0429 20:06:31.599411       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0429 20:06:31.689161       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0429 20:06:31.700597       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0429 20:06:31.700699       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 20:17:26 default-k8s-diff-port-866143 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:17:37 default-k8s-diff-port-866143 kubelet[939]: E0429 20:17:37.773959     939 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 29 20:17:37 default-k8s-diff-port-866143 kubelet[939]: E0429 20:17:37.774055     939 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 29 20:17:37 default-k8s-diff-port-866143 kubelet[939]: E0429 20:17:37.775051     939 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-542jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-g6gw2_kube-system(7a4b0494-73fb-4444-a8c1-544885a2d873): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 29 20:17:37 default-k8s-diff-port-866143 kubelet[939]: E0429 20:17:37.775118     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:17:51 default-k8s-diff-port-866143 kubelet[939]: E0429 20:17:51.736570     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:18:02 default-k8s-diff-port-866143 kubelet[939]: E0429 20:18:02.740142     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:18:15 default-k8s-diff-port-866143 kubelet[939]: E0429 20:18:15.737502     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:18:26 default-k8s-diff-port-866143 kubelet[939]: E0429 20:18:26.760368     939 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:18:26 default-k8s-diff-port-866143 kubelet[939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:18:26 default-k8s-diff-port-866143 kubelet[939]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:18:26 default-k8s-diff-port-866143 kubelet[939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:18:26 default-k8s-diff-port-866143 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:18:30 default-k8s-diff-port-866143 kubelet[939]: E0429 20:18:30.736370     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:18:45 default-k8s-diff-port-866143 kubelet[939]: E0429 20:18:45.735843     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:19:00 default-k8s-diff-port-866143 kubelet[939]: E0429 20:19:00.735445     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:19:11 default-k8s-diff-port-866143 kubelet[939]: E0429 20:19:11.736311     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:19:25 default-k8s-diff-port-866143 kubelet[939]: E0429 20:19:25.735815     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:19:26 default-k8s-diff-port-866143 kubelet[939]: E0429 20:19:26.760390     939 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:19:26 default-k8s-diff-port-866143 kubelet[939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:19:26 default-k8s-diff-port-866143 kubelet[939]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:19:26 default-k8s-diff-port-866143 kubelet[939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:19:26 default-k8s-diff-port-866143 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:19:37 default-k8s-diff-port-866143 kubelet[939]: E0429 20:19:37.735969     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:19:52 default-k8s-diff-port-866143 kubelet[939]: E0429 20:19:52.736063     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	
	
	==> storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] <==
	I0429 20:07:03.128071       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 20:07:03.140310       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 20:07:03.140368       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 20:07:20.549646       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 20:07:20.550253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-866143_c7b182aa-9dc5-483a-a251-942834c1c696!
	I0429 20:07:20.552164       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c09addf-7050-4b36-b55d-ddcd2ef1ab98", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-866143_c7b182aa-9dc5-483a-a251-942834c1c696 became leader
	I0429 20:07:20.651082       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-866143_c7b182aa-9dc5-483a-a251-942834c1c696!
	
	
	==> storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] <==
	I0429 20:06:32.251851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0429 20:07:02.256515       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-866143 -n default-k8s-diff-port-866143
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-866143 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-g6gw2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-866143 describe pod metrics-server-569cc877fc-g6gw2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-866143 describe pod metrics-server-569cc877fc-g6gw2: exit status 1 (62.298927ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-g6gw2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-866143 describe pod metrics-server-569cc877fc-g6gw2: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.46s)
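Note on the metrics-server errors in the kubelet log above: the repeated ErrImagePull / ImagePullBackOff lines for fake.domain/registry.k8s.io/echoserver:1.4 follow from a registry override applied deliberately earlier in this run, not from a real registry outage. A sketch of that override, copied from the Audit table reproduced in the next test's logs (profile name and flags as recorded there):

	# redirect the metrics-server addon image to an unreachable registry (as done earlier in this run)
	out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-866143 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain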

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-456788 -n no-preload-456788
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-29 20:20:13.323863872 +0000 UTC m=+6062.971239005
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
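For context, the wait above polls the cluster for dashboard pods by label. A minimal manual check against this profile would look roughly like the following (context name taken from the test output above; the 9m timeout mirrors the test's wait and is illustrative):

	# list the pods the test is waiting on
	kubectl --context no-preload-456788 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# or block until one becomes Ready, mirroring the test's 9m0s wait
	kubectl --context no-preload-456788 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m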
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456788 -n no-preload-456788
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-456788 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-456788 logs -n 25: (2.286160739s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:55 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| ssh     | cert-options-437743 ssh                                | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-437743 -- sudo                         | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-437743                                 | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-161370            | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-456788             | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-193781 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | disable-driver-mounts-193781                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 20:00 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-866143  | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-161370                 | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-919612        | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-456788                  | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:01 UTC | 29 Apr 24 20:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-919612             | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-866143       | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:10 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:02:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:02:45.502823   66875 out.go:291] Setting OutFile to fd 1 ...
	I0429 20:02:45.503073   66875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:45.503084   66875 out.go:304] Setting ErrFile to fd 2...
	I0429 20:02:45.503089   66875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:45.503272   66875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 20:02:45.503808   66875 out.go:298] Setting JSON to false
	I0429 20:02:45.504681   66875 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6263,"bootTime":1714414702,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 20:02:45.504736   66875 start.go:139] virtualization: kvm guest
	I0429 20:02:45.507344   66875 out.go:177] * [default-k8s-diff-port-866143] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 20:02:45.508715   66875 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:02:45.508745   66875 notify.go:220] Checking for updates...
	I0429 20:02:45.510093   66875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:02:45.512200   66875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:02:45.513622   66875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:02:45.514915   66875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 20:02:45.516228   66875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:02:45.517923   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:02:45.518366   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:45.518446   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:45.533484   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46187
	I0429 20:02:45.533901   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:45.534427   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:02:45.534448   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:45.534822   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:45.535013   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:02:45.535292   66875 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:02:45.535595   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:45.535639   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:45.551065   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0429 20:02:45.551469   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:45.551906   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:02:45.551928   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:45.552239   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:45.552451   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:02:45.584714   66875 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 20:02:45.586089   66875 start.go:297] selected driver: kvm2
	I0429 20:02:45.586117   66875 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:45.586250   66875 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:02:45.587043   66875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:45.587136   66875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 20:02:45.601799   66875 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 20:02:45.602171   66875 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:02:45.602246   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:02:45.602265   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:02:45.602323   66875 start.go:340] cluster config:
	{Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:45.602444   66875 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:45.605081   66875 out.go:177] * Starting "default-k8s-diff-port-866143" primary control-plane node in "default-k8s-diff-port-866143" cluster
	I0429 20:02:42.794291   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:45.866333   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:45.606536   66875 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:02:45.606590   66875 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 20:02:45.606602   66875 cache.go:56] Caching tarball of preloaded images
	I0429 20:02:45.606687   66875 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 20:02:45.606704   66875 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 20:02:45.606799   66875 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/config.json ...
	I0429 20:02:45.606986   66875 start.go:360] acquireMachinesLock for default-k8s-diff-port-866143: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:02:51.946332   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:55.018269   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:01.098329   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:04.170389   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:10.250316   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:13.322292   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:19.402290   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:22.474356   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:28.554348   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:31.626416   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:37.706282   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:40.778321   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:46.858318   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:49.930321   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:56.010331   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:59.082336   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:05.162299   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:08.234328   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:14.314352   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:17.386337   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:23.466350   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:26.538284   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:32.618297   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:35.690319   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:41.770372   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:44.842280   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:50.922320   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:53.994336   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:00.074389   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:03.146353   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:09.226369   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:12.298407   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
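Note: the long run of "Error dialing TCP ... no route to host" entries above is libmachine polling the guest's SSH port until the VM's network comes back. A minimal, self-contained sketch of that probing pattern is shown below; the helper name `waitForSSHPort`, the address, and the timeout values are illustrative assumptions, not minikube's actual implementation.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSHPort repeatedly attempts a TCP connection to addr (host:port)
// until it succeeds or the overall deadline expires, mirroring the
// "Error dialing TCP ... no route to host" polling seen in the log above.
func waitForSSHPort(addr string, interval, deadline time.Duration) error {
	end := time.Now().Add(deadline)
	for {
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err == nil {
			conn.Close()
			return nil // port reachable; SSH provisioning can proceed
		}
		if time.Now().After(end) {
			return fmt.Errorf("timed out waiting for %s: last error: %w", addr, err)
		}
		fmt.Printf("dial %s failed (%v); retrying in %s\n", addr, err, interval)
		time.Sleep(interval)
	}
}

func main() {
	// Illustrative values only; the real driver derives the IP from the VM's DHCP lease.
	if err := waitForSSHPort("192.168.50.184:22", 3*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```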
	I0429 20:05:15.302828   66218 start.go:364] duration metric: took 4m7.483402316s to acquireMachinesLock for "no-preload-456788"
	I0429 20:05:15.302889   66218 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:15.302896   66218 fix.go:54] fixHost starting: 
	I0429 20:05:15.303301   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:15.303337   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:15.319582   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I0429 20:05:15.320057   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:15.320597   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:05:15.320620   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:15.321017   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:15.321272   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:15.321472   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:05:15.323137   66218 fix.go:112] recreateIfNeeded on no-preload-456788: state=Stopped err=<nil>
	I0429 20:05:15.323171   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	W0429 20:05:15.323346   66218 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:15.325520   66218 out.go:177] * Restarting existing kvm2 VM for "no-preload-456788" ...
	I0429 20:05:15.327122   66218 main.go:141] libmachine: (no-preload-456788) Calling .Start
	I0429 20:05:15.327314   66218 main.go:141] libmachine: (no-preload-456788) Ensuring networks are active...
	I0429 20:05:15.328136   66218 main.go:141] libmachine: (no-preload-456788) Ensuring network default is active
	I0429 20:05:15.328437   66218 main.go:141] libmachine: (no-preload-456788) Ensuring network mk-no-preload-456788 is active
	I0429 20:05:15.328771   66218 main.go:141] libmachine: (no-preload-456788) Getting domain xml...
	I0429 20:05:15.329442   66218 main.go:141] libmachine: (no-preload-456788) Creating domain...
	I0429 20:05:16.534970   66218 main.go:141] libmachine: (no-preload-456788) Waiting to get IP...
	I0429 20:05:16.536019   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:16.536375   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:16.536444   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:16.536369   67416 retry.go:31] will retry after 240.743093ms: waiting for machine to come up
	I0429 20:05:16.779123   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:16.779623   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:16.779659   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:16.779558   67416 retry.go:31] will retry after 355.595109ms: waiting for machine to come up
	I0429 20:05:17.137145   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:17.137512   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:17.137542   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:17.137480   67416 retry.go:31] will retry after 347.905643ms: waiting for machine to come up
	I0429 20:05:17.487174   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:17.487566   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:17.487597   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:17.487543   67416 retry.go:31] will retry after 547.016094ms: waiting for machine to come up
	I0429 20:05:15.300221   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:15.300278   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:05:15.300613   65980 buildroot.go:166] provisioning hostname "embed-certs-161370"
	I0429 20:05:15.300652   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:05:15.300910   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:05:15.302677   65980 machine.go:97] duration metric: took 4m37.41104152s to provisionDockerMachine
	I0429 20:05:15.302722   65980 fix.go:56] duration metric: took 4m37.432092484s for fixHost
	I0429 20:05:15.302728   65980 start.go:83] releasing machines lock for "embed-certs-161370", held for 4m37.432113341s
	W0429 20:05:15.302753   65980 start.go:713] error starting host: provision: host is not running
	W0429 20:05:15.302871   65980 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0429 20:05:15.302882   65980 start.go:728] Will try again in 5 seconds ...
	I0429 20:05:18.036617   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:18.037042   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:18.037104   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:18.037025   67416 retry.go:31] will retry after 465.100134ms: waiting for machine to come up
	I0429 20:05:18.503846   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:18.504326   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:18.504352   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:18.504283   67416 retry.go:31] will retry after 672.007195ms: waiting for machine to come up
	I0429 20:05:19.178173   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:19.178570   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:19.178604   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:19.178516   67416 retry.go:31] will retry after 744.052058ms: waiting for machine to come up
	I0429 20:05:19.924561   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:19.925029   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:19.925060   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:19.925002   67416 retry.go:31] will retry after 1.06511003s: waiting for machine to come up
	I0429 20:05:20.991584   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:20.992015   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:20.992046   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:20.991980   67416 retry.go:31] will retry after 1.677065765s: waiting for machine to come up
	I0429 20:05:22.671760   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:22.672123   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:22.672149   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:22.672085   67416 retry.go:31] will retry after 1.979191189s: waiting for machine to come up
	I0429 20:05:20.303964   65980 start.go:360] acquireMachinesLock for embed-certs-161370: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:05:24.654246   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:24.654711   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:24.654735   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:24.654663   67416 retry.go:31] will retry after 1.839551716s: waiting for machine to come up
	I0429 20:05:26.496511   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:26.496982   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:26.497017   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:26.496939   67416 retry.go:31] will retry after 3.505979368s: waiting for machine to come up
	I0429 20:05:30.006590   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:30.006916   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:30.006951   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:30.006871   67416 retry.go:31] will retry after 3.811785899s: waiting for machine to come up
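Note: the `retry.go:31] will retry after ...` intervals above grow roughly exponentially with jitter while the driver waits for the restarted VM to obtain an IP from its libvirt network. The sketch below is a hedged illustration of that wait loop; `getIP`, the backoff parameters, and the attempt cap are placeholders rather than the driver's real code.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoIP stands in for "unable to find current IP address of domain ...".
var errNoIP = errors.New("machine has no IP yet")

// getIP is a placeholder for querying the domain's DHCP lease; it always
// fails here so the loop below exercises the backoff path.
func getIP() (string, error) { return "", errNoIP }

// waitForIP retries getIP with exponential backoff plus jitter, matching the
// growing "will retry after ..." intervals in the log (hundreds of ms up to seconds).
func waitForIP(maxAttempts int) (string, error) {
	backoff := 250 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		ip, err := getIP()
		if err == nil {
			return ip, nil
		}
		// Jitter avoids many waiters polling libvirt in lockstep.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("attempt %d: %v; will retry after %s\n", attempt, err, sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("gave up waiting for machine to come up after %d attempts", maxAttempts)
}

func main() {
	if _, err := waitForIP(5); err != nil {
		fmt.Println(err)
	}
}
```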
	I0429 20:05:35.155600   66615 start.go:364] duration metric: took 3m25.093405289s to acquireMachinesLock for "old-k8s-version-919612"
	I0429 20:05:35.155655   66615 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:35.155661   66615 fix.go:54] fixHost starting: 
	I0429 20:05:35.155999   66615 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:35.156034   66615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:35.173332   66615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0429 20:05:35.173754   66615 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:35.174261   66615 main.go:141] libmachine: Using API Version  1
	I0429 20:05:35.174294   66615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:35.174602   66615 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:35.174797   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:35.174987   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetState
	I0429 20:05:35.176453   66615 fix.go:112] recreateIfNeeded on old-k8s-version-919612: state=Stopped err=<nil>
	I0429 20:05:35.176478   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	W0429 20:05:35.176647   66615 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:35.178966   66615 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-919612" ...
	I0429 20:05:33.823293   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.823787   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has current primary IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.823806   66218 main.go:141] libmachine: (no-preload-456788) Found IP for machine: 192.168.39.235
	I0429 20:05:33.823830   66218 main.go:141] libmachine: (no-preload-456788) Reserving static IP address...
	I0429 20:05:33.824243   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "no-preload-456788", mac: "52:54:00:15:ae:18", ip: "192.168.39.235"} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.824279   66218 main.go:141] libmachine: (no-preload-456788) DBG | skip adding static IP to network mk-no-preload-456788 - found existing host DHCP lease matching {name: "no-preload-456788", mac: "52:54:00:15:ae:18", ip: "192.168.39.235"}
	I0429 20:05:33.824293   66218 main.go:141] libmachine: (no-preload-456788) Reserved static IP address: 192.168.39.235
	I0429 20:05:33.824308   66218 main.go:141] libmachine: (no-preload-456788) Waiting for SSH to be available...
	I0429 20:05:33.824323   66218 main.go:141] libmachine: (no-preload-456788) DBG | Getting to WaitForSSH function...
	I0429 20:05:33.826371   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.826678   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.826711   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.826808   66218 main.go:141] libmachine: (no-preload-456788) DBG | Using SSH client type: external
	I0429 20:05:33.826836   66218 main.go:141] libmachine: (no-preload-456788) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa (-rw-------)
	I0429 20:05:33.826863   66218 main.go:141] libmachine: (no-preload-456788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:05:33.826876   66218 main.go:141] libmachine: (no-preload-456788) DBG | About to run SSH command:
	I0429 20:05:33.826887   66218 main.go:141] libmachine: (no-preload-456788) DBG | exit 0
	I0429 20:05:33.954275   66218 main.go:141] libmachine: (no-preload-456788) DBG | SSH cmd err, output: <nil>: 
	I0429 20:05:33.954631   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetConfigRaw
	I0429 20:05:33.955387   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:33.957827   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.958210   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.958241   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.958510   66218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/config.json ...
	I0429 20:05:33.958707   66218 machine.go:94] provisionDockerMachine start ...
	I0429 20:05:33.958726   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:33.958952   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:33.961236   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.961535   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.961564   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.961692   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:33.961857   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:33.962015   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:33.962163   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:33.962339   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:33.962522   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:33.962533   66218 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:05:34.070746   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:05:34.070777   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.071037   66218 buildroot.go:166] provisioning hostname "no-preload-456788"
	I0429 20:05:34.071062   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.071305   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.073680   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.074016   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.074043   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.074203   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.074374   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.074513   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.074612   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.074743   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.074946   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.074960   66218 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-456788 && echo "no-preload-456788" | sudo tee /etc/hostname
	I0429 20:05:34.198256   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-456788
	
	I0429 20:05:34.198286   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.201126   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.201482   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.201521   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.201710   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.201914   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.202055   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.202219   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.202361   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.202549   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.202573   66218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-456788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-456788/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-456788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:05:34.324678   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:34.324710   66218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:05:34.324732   66218 buildroot.go:174] setting up certificates
	I0429 20:05:34.324744   66218 provision.go:84] configureAuth start
	I0429 20:05:34.324756   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.325032   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:34.327623   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.328010   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.328040   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.328149   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.330359   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.330679   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.330711   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.330811   66218 provision.go:143] copyHostCerts
	I0429 20:05:34.330865   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:05:34.330878   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:05:34.330939   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:05:34.331023   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:05:34.331031   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:05:34.331054   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:05:34.331111   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:05:34.331119   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:05:34.331148   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:05:34.331231   66218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.no-preload-456788 san=[127.0.0.1 192.168.39.235 localhost minikube no-preload-456788]
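Note: the provision step above regenerates the machine's server certificate, signed by the local minikube CA, with the listed SANs (127.0.0.1, the VM IP, localhost, minikube, and the machine name). Below is a compact sketch of issuing such a SAN-bearing certificate with Go's crypto/x509; the file paths, the `issueServerCert` helper, and the PKCS#1 RSA CA key format are assumptions made for illustration.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServerCert signs a server certificate for the given SANs with the CA
// at caCertPath/caKeyPath (assumed PEM-encoded RSA) and returns PEM output.
func issueServerCert(caCertPath, caKeyPath string, dnsNames []string, ips []net.IP) (certPEM, keyPEM []byte, err error) {
	caCertBytes, err := os.ReadFile(caCertPath)
	if err != nil {
		return nil, nil, err
	}
	caKeyBytes, err := os.ReadFile(caKeyPath)
	if err != nil {
		return nil, nil, err
	}
	caBlock, _ := pem.Decode(caCertBytes)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	keyBlock, _ := pem.Decode(caKeyBytes)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-456788"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
	return certPEM, keyPEM, nil
}

func main() {
	cert, _, err := issueServerCert("ca.pem", "ca-key.pem",
		[]string{"localhost", "minikube", "no-preload-456788"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.235")})
	if err != nil {
		fmt.Println("issue failed:", err)
		return
	}
	fmt.Print(string(cert))
}
```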
	I0429 20:05:34.444358   66218 provision.go:177] copyRemoteCerts
	I0429 20:05:34.444420   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:05:34.444445   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.447129   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.447432   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.447466   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.447623   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.447833   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.447999   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.448129   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:34.533465   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:05:34.561724   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:05:34.589229   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 20:05:34.617451   66218 provision.go:87] duration metric: took 292.691614ms to configureAuth
	I0429 20:05:34.617491   66218 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:05:34.617733   66218 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:05:34.617821   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.620628   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.621016   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.621047   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.621257   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.621532   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.621718   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.621892   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.622085   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.622289   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.622305   66218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:05:34.908031   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:05:34.908064   66218 machine.go:97] duration metric: took 949.343369ms to provisionDockerMachine
	I0429 20:05:34.908077   66218 start.go:293] postStartSetup for "no-preload-456788" (driver="kvm2")
	I0429 20:05:34.908091   66218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:05:34.908107   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:34.908452   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:05:34.908489   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.911574   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.912026   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.912054   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.912219   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.912428   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.912616   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.912743   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:34.997625   66218 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:05:35.002661   66218 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:05:35.002687   66218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:05:35.002753   66218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:05:35.002822   66218 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:05:35.002906   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:05:35.013292   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:35.039830   66218 start.go:296] duration metric: took 131.741312ms for postStartSetup
	I0429 20:05:35.039865   66218 fix.go:56] duration metric: took 19.736969384s for fixHost
	I0429 20:05:35.039905   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.042526   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.042877   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.042912   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.043032   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.043239   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.043416   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.043534   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.043696   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:35.043848   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:35.043858   66218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:05:35.155463   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421135.123583649
	
	I0429 20:05:35.155485   66218 fix.go:216] guest clock: 1714421135.123583649
	I0429 20:05:35.155496   66218 fix.go:229] Guest: 2024-04-29 20:05:35.123583649 +0000 UTC Remote: 2024-04-29 20:05:35.039869068 +0000 UTC m=+267.371683880 (delta=83.714581ms)
	I0429 20:05:35.155514   66218 fix.go:200] guest clock delta is within tolerance: 83.714581ms
	I0429 20:05:35.155519   66218 start.go:83] releasing machines lock for "no-preload-456788", held for 19.852645936s
	I0429 20:05:35.155544   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.155881   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:35.158682   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.159051   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.159070   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.159205   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.159793   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.159987   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.160077   66218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:05:35.160117   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.160216   66218 ssh_runner.go:195] Run: cat /version.json
	I0429 20:05:35.160244   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.162788   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163016   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163226   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.163250   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163372   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.163449   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.163475   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163537   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.163621   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.163723   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.163791   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.163873   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:35.163920   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.164064   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:35.248518   66218 ssh_runner.go:195] Run: systemctl --version
	I0429 20:05:35.271479   66218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:05:35.423324   66218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:05:35.430371   66218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:05:35.430445   66218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:05:35.447860   66218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:05:35.447886   66218 start.go:494] detecting cgroup driver to use...
	I0429 20:05:35.447949   66218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:05:35.464102   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:05:35.479069   66218 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:05:35.479158   66218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:05:35.493800   66218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:05:35.509284   66218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:05:35.627273   66218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:05:35.785213   66218 docker.go:233] disabling docker service ...
	I0429 20:05:35.785300   66218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:05:35.803584   66218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:05:35.818874   66218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:05:35.984309   66218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:05:36.128841   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:05:36.148237   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:05:36.172144   66218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:05:36.172243   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.191274   66218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:05:36.191353   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.209656   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.224474   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.238802   66218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:05:36.252515   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.264522   66218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.286496   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.299127   66218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:05:36.310702   66218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:05:36.310760   66218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:05:36.336226   66218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:05:36.348617   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:36.474875   66218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:05:36.619181   66218 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:05:36.619257   66218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:05:36.625401   66218 start.go:562] Will wait 60s for crictl version
	I0429 20:05:36.625475   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:36.630232   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:05:36.667005   66218 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:05:36.667093   66218 ssh_runner.go:195] Run: crio --version
	I0429 20:05:36.699758   66218 ssh_runner.go:195] Run: crio --version
	I0429 20:05:36.734406   66218 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:05:36.735853   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:36.738683   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:36.739019   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:36.739049   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:36.739310   66218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 20:05:36.745227   66218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:36.760124   66218 kubeadm.go:877] updating cluster {Name:no-preload-456788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:05:36.760238   66218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:05:36.760278   66218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:05:36.801389   66218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:05:36.801414   66218 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 20:05:36.801470   66218 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:36.801508   66218 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:36.801524   66218 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:36.801559   66218 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:36.801580   66218 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.801632   66218 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0429 20:05:36.801687   66218 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:36.801688   66218 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:36.803301   66218 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:36.803300   66218 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.803308   66218 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:36.803382   66218 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:36.956976   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.964957   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.022376   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.025860   66218 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0429 20:05:37.025893   66218 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0429 20:05:37.025915   66218 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.025924   66218 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:37.025962   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.025964   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.072629   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.072688   66218 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0429 20:05:37.072713   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:37.072741   66218 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.072791   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.118610   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0429 20:05:37.118704   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.118720   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.128364   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0429 20:05:37.128474   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:37.161350   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0429 20:05:37.165670   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0429 20:05:37.165693   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0429 20:05:37.165710   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.165754   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.165762   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0429 20:05:37.165779   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:37.167440   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:37.174173   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:37.180560   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:37.715733   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
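
	The block above is minikube's cached-image sync: podman inspect compares the image ID in the container runtime against the expected digest, crictl rmi removes a mismatched copy, and podman load restores the image from the tarball under /var/lib/minikube/images. The following is a minimal standalone Go sketch of that flow; the runtimeImageID/ensureImage helpers and the truncated digest are illustrative assumptions, not minikube's actual code.

	// Hypothetical sketch of the cached-image load flow seen above:
	// inspect the image in the runtime, remove it if the digest differs,
	// then load the cached tarball with podman. The digest and paths are
	// placeholders, not values checked against this test run.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func runtimeImageID(image string) (string, error) {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return "", err // typically the image is simply not known yet
		}
		return strings.TrimSpace(string(out)), nil
	}

	func ensureImage(image, wantID, cachedTar string) error {
		id, err := runtimeImageID(image)
		if err == nil && id == wantID {
			return nil // already present at the expected digest
		}
		if err == nil {
			// Digest mismatch: remove the stale copy first.
			if err := exec.Command("sudo", "crictl", "rmi", image).Run(); err != nil {
				return fmt.Errorf("rmi %s: %w", image, err)
			}
		}
		// Load the tarball that was copied to /var/lib/minikube/images.
		return exec.Command("sudo", "podman", "load", "-i", cachedTar).Run()
	}

	func main() {
		err := ensureImage(
			"registry.k8s.io/kube-proxy:v1.30.0",
			"a0bf559e280c...", // truncated placeholder digest
			"/var/lib/minikube/images/kube-proxy_v1.30.0",
		)
		fmt.Println("ensureImage:", err)
	}
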
	I0429 20:05:35.180393   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .Start
	I0429 20:05:35.180576   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring networks are active...
	I0429 20:05:35.181281   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network default is active
	I0429 20:05:35.181678   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network mk-old-k8s-version-919612 is active
	I0429 20:05:35.182102   66615 main.go:141] libmachine: (old-k8s-version-919612) Getting domain xml...
	I0429 20:05:35.182867   66615 main.go:141] libmachine: (old-k8s-version-919612) Creating domain...
	I0429 20:05:36.459478   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting to get IP...
	I0429 20:05:36.460301   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.460751   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.460817   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.460706   67552 retry.go:31] will retry after 280.48781ms: waiting for machine to come up
	I0429 20:05:36.743188   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.743630   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.743658   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.743591   67552 retry.go:31] will retry after 326.238132ms: waiting for machine to come up
	I0429 20:05:37.071146   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.071576   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.071609   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.071527   67552 retry.go:31] will retry after 380.72234ms: waiting for machine to come up
	I0429 20:05:37.453967   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.454435   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.454464   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.454385   67552 retry.go:31] will retry after 593.303053ms: waiting for machine to come up
	I0429 20:05:38.049072   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.049555   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.049587   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.049500   67552 retry.go:31] will retry after 694.752524ms: waiting for machine to come up
	I0429 20:05:38.746542   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.747034   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.747065   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.747002   67552 retry.go:31] will retry after 860.161186ms: waiting for machine to come up
	I0429 20:05:39.609098   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:39.609601   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:39.609634   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:39.609544   67552 retry.go:31] will retry after 726.889681ms: waiting for machine to come up
	I0429 20:05:39.327634   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.161845487s)
	I0429 20:05:39.327673   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.161870572s)
	I0429 20:05:39.327710   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0429 20:05:39.327675   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0429 20:05:39.327737   66218 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:39.327748   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0: (2.16027023s)
	I0429 20:05:39.327805   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:39.327811   66218 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0429 20:05:39.327821   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0: (2.153617598s)
	I0429 20:05:39.327846   66218 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:39.327878   66218 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0429 20:05:39.327891   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0: (2.147303278s)
	I0429 20:05:39.327910   66218 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:39.327929   66218 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0429 20:05:39.327944   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327954   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.612190652s)
	I0429 20:05:39.327960   66218 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:39.327984   66218 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0429 20:05:39.328035   66218 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:39.328061   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327991   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327886   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.333555   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:39.343257   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:41.263038   66218 ssh_runner.go:235] Completed: which crictl: (1.934889703s)
	I0429 20:05:41.263103   66218 ssh_runner.go:235] Completed: which crictl: (1.93491368s)
	I0429 20:05:41.263121   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:41.263132   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.935299869s)
	I0429 20:05:41.263153   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0: (1.929577799s)
	I0429 20:05:41.263155   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0429 20:05:41.263217   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.919934007s)
	I0429 20:05:41.263221   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0429 20:05:41.263248   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:41.263251   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0429 20:05:41.263290   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:41.263301   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:41.263343   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:41.263159   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:40.338292   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:40.338823   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:40.338864   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:40.338757   67552 retry.go:31] will retry after 1.310400969s: waiting for machine to come up
	I0429 20:05:41.651107   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:41.651625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:41.651670   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:41.651575   67552 retry.go:31] will retry after 1.769756679s: waiting for machine to come up
	I0429 20:05:43.423326   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:43.423829   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:43.423869   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:43.423790   67552 retry.go:31] will retry after 1.748237944s: waiting for machine to come up
	I0429 20:05:44.084051   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.820737476s)
	I0429 20:05:44.084139   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.820774517s)
	I0429 20:05:44.084167   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.820842646s)
	I0429 20:05:44.084186   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0429 20:05:44.084142   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0429 20:05:44.084202   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0429 20:05:44.084211   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:44.084065   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0: (2.820919138s)
	I0429 20:05:44.084244   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0429 20:05:44.084260   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:44.084272   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0: (2.82086612s)
	I0429 20:05:44.084305   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0429 20:05:44.084331   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:44.084375   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:44.091151   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0429 20:05:46.553783   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.469493694s)
	I0429 20:05:46.553882   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0429 20:05:46.553912   66218 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:46.553837   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.469479626s)
	I0429 20:05:46.553973   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:46.553975   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0429 20:05:47.510118   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0429 20:05:47.510169   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:47.510212   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:45.173157   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:45.173617   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:45.173642   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:45.173563   67552 retry.go:31] will retry after 2.784243469s: waiting for machine to come up
	I0429 20:05:47.959942   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:47.960473   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:47.960508   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:47.960410   67552 retry.go:31] will retry after 3.046526969s: waiting for machine to come up
	I0429 20:05:49.069163   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.55892426s)
	I0429 20:05:49.069202   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0429 20:05:49.069231   66218 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:49.069276   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:51.007941   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:51.008230   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:51.008253   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:51.008213   67552 retry.go:31] will retry after 4.220985004s: waiting for machine to come up
	I0429 20:05:56.579154   66875 start.go:364] duration metric: took 3m10.972135355s to acquireMachinesLock for "default-k8s-diff-port-866143"
	I0429 20:05:56.579208   66875 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:56.579230   66875 fix.go:54] fixHost starting: 
	I0429 20:05:56.579615   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:56.579655   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:56.599113   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0429 20:05:56.599627   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:56.600173   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:05:56.600198   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:56.600488   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:56.600694   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:05:56.600849   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:05:56.602291   66875 fix.go:112] recreateIfNeeded on default-k8s-diff-port-866143: state=Stopped err=<nil>
	I0429 20:05:56.602315   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	W0429 20:05:56.602456   66875 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:56.605006   66875 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-866143" ...
	I0429 20:05:53.062693   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.993382111s)
	I0429 20:05:53.062730   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0429 20:05:53.062757   66218 cache_images.go:123] Successfully loaded all cached images
	I0429 20:05:53.062762   66218 cache_images.go:92] duration metric: took 16.261337424s to LoadCachedImages
	I0429 20:05:53.062770   66218 kubeadm.go:928] updating node { 192.168.39.235 8443 v1.30.0 crio true true} ...
	I0429 20:05:53.062893   66218 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-456788 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:05:53.062994   66218 ssh_runner.go:195] Run: crio config
	I0429 20:05:53.116289   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:05:53.116311   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:05:53.116322   66218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:05:53.116340   66218 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.235 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-456788 NodeName:no-preload-456788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:05:53.116516   66218 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-456788"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:05:53.116592   66218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:05:53.128095   66218 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:05:53.128174   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:05:53.138786   66218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0429 20:05:53.158151   66218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:05:53.176440   66218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 20:05:53.195348   66218 ssh_runner.go:195] Run: grep 192.168.39.235	control-plane.minikube.internal$ /etc/hosts
	I0429 20:05:53.199408   66218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:53.212407   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:53.349752   66218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:05:53.368381   66218 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788 for IP: 192.168.39.235
	I0429 20:05:53.368401   66218 certs.go:194] generating shared ca certs ...
	I0429 20:05:53.368415   66218 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:05:53.368565   66218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:05:53.368609   66218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:05:53.368619   66218 certs.go:256] generating profile certs ...
	I0429 20:05:53.368697   66218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.key
	I0429 20:05:53.368751   66218 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.key.5f45c78c
	I0429 20:05:53.368785   66218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.key
	I0429 20:05:53.368889   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:05:53.368915   66218 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:05:53.368921   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:05:53.368944   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:05:53.368972   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:05:53.368993   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:05:53.369029   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:53.369624   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:05:53.428403   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:05:53.467050   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:05:53.501319   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:05:53.528828   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:05:53.553742   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:05:53.582308   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:05:53.609324   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:05:53.636730   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:05:53.663388   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:05:53.690949   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:05:53.717113   66218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:05:53.735784   66218 ssh_runner.go:195] Run: openssl version
	I0429 20:05:53.741879   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:05:53.752930   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.757811   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.757861   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.763798   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:05:53.775019   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:05:53.786654   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.791457   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.791500   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.797608   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:05:53.809139   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:05:53.820927   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.826384   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.826441   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.832798   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:05:53.844300   66218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:05:53.849139   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:05:53.855556   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:05:53.861716   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:05:53.868390   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:05:53.874740   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:05:53.881101   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
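
	The openssl commands above do two things: each CA bundle copied under /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject hash, and every kubeadm-managed certificate is checked to still be valid for at least another 86400 seconds (-checkend). A rough Go equivalent that shells out to openssl the same way is sketched below; the file paths are placeholders.

	// Sketch of the certificate housekeeping above: link a CA cert under its
	// OpenSSL subject hash and verify a cert is not about to expire.
	// The paths are illustrative placeholders.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// subjectHash returns what `openssl x509 -hash -noout -in cert` prints,
	// which is the filename stem OpenSSL expects in /etc/ssl/certs.
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	// validFor reports whether the certificate is still valid `seconds` from now,
	// mirroring `openssl x509 -noout -checkend <seconds>` (exit 0 means valid).
	func validFor(certPath string, seconds int) bool {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath,
			"-checkend", fmt.Sprint(seconds))
		return cmd.Run() == nil
	}

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path
		hash, err := subjectHash(cert)
		if err != nil {
			fmt.Fprintln(os.Stderr, "hash:", err)
			return
		}
		link := "/etc/ssl/certs/" + hash + ".0"
		fmt.Printf("would link %s -> %s\n", link, cert)
		fmt.Println("valid for 24h:", validFor(cert, 86400))
	}
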
	I0429 20:05:53.887688   66218 kubeadm.go:391] StartCluster: {Name:no-preload-456788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:05:53.887807   66218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:05:53.887858   66218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:05:53.930491   66218 cri.go:89] found id: ""
	I0429 20:05:53.930563   66218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:05:53.941016   66218 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:05:53.941037   66218 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:05:53.941042   66218 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:05:53.941081   66218 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:05:53.950651   66218 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:05:53.951536   66218 kubeconfig.go:125] found "no-preload-456788" server: "https://192.168.39.235:8443"
	I0429 20:05:53.953451   66218 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:05:53.962857   66218 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.235
	I0429 20:05:53.962879   66218 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:05:53.962889   66218 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:05:53.962932   66218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:05:54.000841   66218 cri.go:89] found id: ""
	I0429 20:05:54.000909   66218 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:05:54.018221   66218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:05:54.028524   66218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:05:54.028556   66218 kubeadm.go:156] found existing configuration files:
	
	I0429 20:05:54.028600   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:05:54.038717   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:05:54.038807   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:05:54.049350   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:05:54.059483   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:05:54.059548   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:05:54.069518   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:05:54.078900   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:05:54.078953   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:05:54.088652   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:05:54.098545   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:05:54.098596   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:05:54.108351   66218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
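
	The grep/rm sequence above is the stale-kubeconfig check: each file under /etc/kubernetes is searched for the expected control-plane endpoint and removed when it does not contain it (here all four are simply absent), so the kubeadm phases that follow can regenerate them. A compact Go sketch of that cleanup loop, assuming the endpoint and file list shown in the log, not minikube's actual helper:

	// Sketch of the stale-config cleanup seen above: any kubeconfig under
	// /etc/kubernetes that does not mention the expected control-plane URL
	// is removed so `kubeadm init phase kubeconfig` can regenerate it.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				// Missing file: nothing to clean up (matches the grep failures above).
				continue
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Println("removing stale config:", f)
				_ = os.Remove(f)
			}
		}
	}
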
	I0429 20:05:54.118645   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:54.236330   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:55.859211   66218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.622843221s)
	I0429 20:05:55.859254   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.075993   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.175176   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.274249   66218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:05:56.274469   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:56.775315   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:57.274840   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:57.315656   66218 api_server.go:72] duration metric: took 1.041421989s to wait for apiserver process to appear ...
	I0429 20:05:57.315697   66218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:05:57.315719   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:05:57.316669   66218 api_server.go:269] stopped: https://192.168.39.235:8443/healthz: Get "https://192.168.39.235:8443/healthz": dial tcp 192.168.39.235:8443: connect: connection refused
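
	After the control-plane phases, minikube first waits for a kube-apiserver process (pgrep) and then polls https://192.168.39.235:8443/healthz; the first probe above fails with "connection refused" because the apiserver has not started listening yet. Below is a hedged sketch of such a healthz wait loop; the HTTP client settings, poll interval, and timeout are assumptions for illustration.

	// Poll an apiserver /healthz endpoint until it returns 200 or a deadline
	// passes. TLS verification is skipped here only because the sketch has no
	// CA bundle wired in; a real client would trust the cluster CA instead.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.39.235:8443/healthz", 4*time.Minute))
	}
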
	I0429 20:05:55.230409   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230860   66615 main.go:141] libmachine: (old-k8s-version-919612) Found IP for machine: 192.168.72.240
	I0429 20:05:55.230889   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has current primary IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230898   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserving static IP address...
	I0429 20:05:55.231252   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.231287   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | skip adding static IP to network mk-old-k8s-version-919612 - found existing host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"}
	I0429 20:05:55.231305   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserved static IP address: 192.168.72.240
	I0429 20:05:55.231319   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting for SSH to be available...
	I0429 20:05:55.231335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Getting to WaitForSSH function...
	I0429 20:05:55.233198   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.233500   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH client type: external
	I0429 20:05:55.233671   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa (-rw-------)
	I0429 20:05:55.233706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:05:55.233730   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | About to run SSH command:
	I0429 20:05:55.233747   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | exit 0
	I0429 20:05:55.354242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | SSH cmd err, output: <nil>: 
	I0429 20:05:55.354584   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetConfigRaw
	I0429 20:05:55.355221   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.357791   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.358276   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358564   66615 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/config.json ...
	I0429 20:05:55.358786   66615 machine.go:94] provisionDockerMachine start ...
	I0429 20:05:55.358807   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:55.359037   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.361536   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.361861   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.361885   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.362048   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.362247   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362416   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362568   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.362733   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.362930   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.362943   66615 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:05:55.462364   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:05:55.462388   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462632   66615 buildroot.go:166] provisioning hostname "old-k8s-version-919612"
	I0429 20:05:55.462669   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462852   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.465335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465674   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.465706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465836   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.466034   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466208   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466366   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.466525   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.466729   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.466745   66615 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-919612 && echo "old-k8s-version-919612" | sudo tee /etc/hostname
	I0429 20:05:55.596239   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-919612
	
	I0429 20:05:55.596281   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.599221   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599575   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.599606   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599770   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.599970   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600122   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600316   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.600498   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.600667   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.600690   66615 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-919612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-919612/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-919612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:05:55.716588   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:55.716621   66615 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:05:55.716647   66615 buildroot.go:174] setting up certificates
	I0429 20:05:55.716658   66615 provision.go:84] configureAuth start
	I0429 20:05:55.716671   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.716956   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.719569   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.719919   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.719956   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.720095   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.722484   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.722876   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.722912   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.723036   66615 provision.go:143] copyHostCerts
	I0429 20:05:55.723087   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:05:55.723097   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:05:55.723158   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:05:55.723253   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:05:55.723262   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:05:55.723280   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:05:55.723336   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:05:55.723342   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:05:55.723358   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:05:55.723404   66615 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-919612 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-919612]
	I0429 20:05:55.878639   66615 provision.go:177] copyRemoteCerts
	I0429 20:05:55.878724   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:05:55.878750   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.881746   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882306   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.882358   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882540   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.882743   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.882986   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.883139   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:55.973158   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:05:56.003094   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0429 20:05:56.031670   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:05:56.059049   66615 provision.go:87] duration metric: took 342.376371ms to configureAuth
	I0429 20:05:56.059091   66615 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:05:56.059335   66615 config.go:182] Loaded profile config "old-k8s-version-919612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 20:05:56.059441   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.062416   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.062887   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.062921   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.063082   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.063322   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063521   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063688   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.063901   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.064066   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.064082   66615 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:05:56.342484   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:05:56.342511   66615 machine.go:97] duration metric: took 983.711183ms to provisionDockerMachine
	I0429 20:05:56.342525   66615 start.go:293] postStartSetup for "old-k8s-version-919612" (driver="kvm2")
	I0429 20:05:56.342540   66615 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:05:56.342589   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.342931   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:05:56.342983   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.345399   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345710   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.345731   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345869   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.346047   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.346233   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.346418   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.431189   66615 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:05:56.435878   66615 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:05:56.435903   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:05:56.435983   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:05:56.436086   66615 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:05:56.436170   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:05:56.445841   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:56.472683   66615 start.go:296] duration metric: took 130.146591ms for postStartSetup
	I0429 20:05:56.472715   66615 fix.go:56] duration metric: took 21.31705375s for fixHost
	I0429 20:05:56.472736   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.475127   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.475492   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475624   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.475857   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476055   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476211   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.476378   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.476536   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.476547   66615 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:05:56.578999   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421156.548872445
	
	I0429 20:05:56.579028   66615 fix.go:216] guest clock: 1714421156.548872445
	I0429 20:05:56.579040   66615 fix.go:229] Guest: 2024-04-29 20:05:56.548872445 +0000 UTC Remote: 2024-04-29 20:05:56.472718546 +0000 UTC m=+226.572342220 (delta=76.153899ms)
	I0429 20:05:56.579068   66615 fix.go:200] guest clock delta is within tolerance: 76.153899ms
	I0429 20:05:56.579076   66615 start.go:83] releasing machines lock for "old-k8s-version-919612", held for 21.423436193s
	I0429 20:05:56.579111   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.579407   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:56.582338   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582673   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.582711   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582856   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583365   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583543   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583625   66615 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:05:56.583667   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.583765   66615 ssh_runner.go:195] Run: cat /version.json
	I0429 20:05:56.583805   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.586263   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586552   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586618   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586656   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586891   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.586953   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586989   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.587060   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587170   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.587240   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587310   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587458   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587462   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.587600   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.672678   66615 ssh_runner.go:195] Run: systemctl --version
	I0429 20:05:56.694175   66615 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:05:56.859009   66615 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:05:56.865723   66615 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:05:56.865798   66615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:05:56.885686   66615 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:05:56.885714   66615 start.go:494] detecting cgroup driver to use...
	I0429 20:05:56.885805   66615 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:05:56.909082   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:05:56.931583   66615 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:05:56.931646   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:05:56.953524   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:05:56.976170   66615 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:05:57.122813   66615 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:05:57.315725   66615 docker.go:233] disabling docker service ...
	I0429 20:05:57.315786   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:05:57.333927   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:05:57.350022   66615 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:05:57.525787   66615 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:05:57.685802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:05:57.703246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:05:57.730558   66615 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0429 20:05:57.730618   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.747081   66615 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:05:57.747133   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.760168   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.773553   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.787609   66615 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:05:57.800532   66615 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:05:57.813582   66615 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:05:57.813669   66615 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:05:57.832224   66615 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:05:57.844783   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:57.991666   66615 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:05:58.183635   66615 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:05:58.183718   66615 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:05:58.189441   66615 start.go:562] Will wait 60s for crictl version
	I0429 20:05:58.189509   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:05:58.194049   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:05:58.250751   66615 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:05:58.250839   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.292368   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.336121   66615 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0429 20:05:58.337389   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:58.340707   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341125   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:58.341153   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341387   66615 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0429 20:05:58.346434   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:58.361081   66615 kubeadm.go:877] updating cluster {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:05:58.361242   66615 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 20:05:58.361307   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:05:58.414304   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:05:58.414366   66615 ssh_runner.go:195] Run: which lz4
	I0429 20:05:58.420584   66615 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:05:58.425682   66615 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:05:58.425712   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0429 20:05:56.606748   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Start
	I0429 20:05:56.606929   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring networks are active...
	I0429 20:05:56.607627   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring network default is active
	I0429 20:05:56.608028   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring network mk-default-k8s-diff-port-866143 is active
	I0429 20:05:56.608557   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Getting domain xml...
	I0429 20:05:56.609325   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Creating domain...
	I0429 20:05:57.911657   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting to get IP...
	I0429 20:05:57.912705   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:57.913118   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:57.913211   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:57.913104   67743 retry.go:31] will retry after 298.590493ms: waiting for machine to come up
	I0429 20:05:58.213730   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.214424   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.214578   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:58.214487   67743 retry.go:31] will retry after 375.439886ms: waiting for machine to come up
	I0429 20:05:58.592145   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.592671   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.592700   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:58.592626   67743 retry.go:31] will retry after 432.890106ms: waiting for machine to come up
	I0429 20:05:59.027344   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.027782   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.027812   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:59.027732   67743 retry.go:31] will retry after 547.616894ms: waiting for machine to come up
	I0429 20:05:59.576555   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.577116   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.577140   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:59.577058   67743 retry.go:31] will retry after 662.088326ms: waiting for machine to come up
	I0429 20:06:00.240907   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.241712   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.241744   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:00.241667   67743 retry.go:31] will retry after 691.874394ms: waiting for machine to come up
	I0429 20:05:57.816218   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.079778   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:01.079817   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:01.079832   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.112008   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:01.112043   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:01.316358   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.322401   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:01.322437   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:01.815974   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.825156   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:01.825219   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:02.316473   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:02.328725   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:02.328763   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:02.816674   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:02.822826   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:02.822866   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:03.315863   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:03.323314   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:03.323366   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:03.816529   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:03.822521   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:03.822556   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:04.316336   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:04.325750   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I0429 20:06:04.337308   66218 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:04.337348   66218 api_server.go:131] duration metric: took 7.02164287s to wait for apiserver health ...
	I0429 20:06:04.337361   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:06:04.337370   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:04.505344   66218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:00.520217   66615 crio.go:462] duration metric: took 2.099664395s to copy over tarball
	I0429 20:06:00.520314   66615 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:04.082476   66615 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562128598s)
	I0429 20:06:04.082527   66615 crio.go:469] duration metric: took 3.562271241s to extract the tarball
	I0429 20:06:04.082538   66615 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:04.129338   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:04.177683   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:06:04.177709   66615 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 20:06:04.177762   66615 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.177798   66615 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.177817   66615 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.177834   66615 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.177835   66615 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.177783   66615 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.177897   66615 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0429 20:06:04.177972   66615 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179282   66615 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.179360   66615 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.179361   66615 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.179320   66615 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.179331   66615 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.179299   66615 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0429 20:06:04.323997   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376145   66615 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0429 20:06:04.376210   66615 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376261   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.381592   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.420565   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0429 20:06:04.440670   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0429 20:06:04.461763   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.499283   66615 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0429 20:06:04.499347   66615 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0429 20:06:04.499404   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513860   66615 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0429 20:06:04.513900   66615 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.513946   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513988   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0429 20:06:04.548990   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.556713   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.556942   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.556965   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0429 20:06:04.566227   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.598982   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.656930   66615 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0429 20:06:04.656980   66615 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.657038   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.724922   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0429 20:06:04.725179   66615 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0429 20:06:04.725218   66615 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.725262   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732375   66615 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0429 20:06:04.732429   66615 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.732482   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732492   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.732483   66615 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0429 20:06:04.732669   66615 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.732726   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.735419   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.739785   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.742496   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.834684   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0429 20:06:04.834754   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0429 20:06:04.834811   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0429 20:06:04.847076   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
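
The cache_images sequence above works image by image: `podman image inspect --format {{.Id}}` looks up what the runtime already holds, a mismatch against the expected hash marks the image as "needs transfer", the stale tag is removed with `crictl rmi`, and the image is then loaded from the local cache under .minikube/cache/images. A rough sketch of that decision in Go, assuming a local runCmd helper in place of minikube's ssh_runner; the helper names are hypothetical and this is not the real cache_images.go.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runCmd is a stand-in for minikube's ssh_runner: it runs the command
    // locally and returns trimmed stdout.
    func runCmd(name string, args ...string) (string, error) {
        out, err := exec.Command(name, args...).Output()
        return strings.TrimSpace(string(out)), err
    }

    // ensureImage checks whether the image already exists in the container
    // runtime with the expected ID; if not, it removes any stale tag and
    // reports that the cached tarball has to be transferred and loaded.
    func ensureImage(image, wantID, cachePath string) error {
        gotID, err := runCmd("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
        if err == nil && gotID == wantID {
            return nil // already present, nothing to do
        }
        // Stale or missing: drop whatever the runtime has under this tag ...
        _, _ = runCmd("sudo", "crictl", "rmi", image)
        // ... and load the image from the local cache (transfer + load omitted).
        fmt.Printf("loading %s from %s\n", image, cachePath)
        return nil
    }

    func main() {
        _ = ensureImage("registry.k8s.io/pause:3.2",
            "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
            "/home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2")
    }
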
	I0429 20:06:00.935382   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.935935   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.935979   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:00.935902   67743 retry.go:31] will retry after 1.024898519s: waiting for machine to come up
	I0429 20:06:01.962446   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:01.963109   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:01.963140   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:01.963059   67743 retry.go:31] will retry after 1.19225855s: waiting for machine to come up
	I0429 20:06:03.157257   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:03.157781   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:03.157843   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:03.157738   67743 retry.go:31] will retry after 1.699779549s: waiting for machine to come up
	I0429 20:06:04.859190   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:04.859622   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:04.859670   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:04.859565   67743 retry.go:31] will retry after 2.307475318s: waiting for machine to come up
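
The libmachine lines above wait for the new VM to pick up a DHCP lease, retrying the IP lookup with delays that grow slightly each round (1.02s, 1.19s, 1.70s, 2.31s, ...). A minimal sketch of that wait-with-jittered-backoff pattern; lookupIP, the jitter fraction, and the growth factor are assumptions for illustration, not the retry.go implementation.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the DHCP leases of the libvirt network;
    // it fails until the guest has actually acquired an address.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address of domain")
    }

    // waitForIP retries lookupIP with an increasing, jittered delay, mirroring
    // the "will retry after ..." messages in the log.
    func waitForIP(maxAttempts int) (string, error) {
        delay := time.Second
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // add up to 25% jitter and grow the base delay for the next round
            sleep := delay + time.Duration(rand.Int63n(int64(delay/4)))
            fmt.Printf("attempt %d failed, will retry after %s\n", attempt, sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return "", errors.New("machine did not come up")
    }

    func main() {
        if _, err := waitForIP(3); err != nil {
            fmt.Println(err)
        }
    }
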
	I0429 20:06:04.671477   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:04.684650   66218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:04.718146   66218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:04.908181   66218 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:04.908213   66218 system_pods.go:61] "coredns-7db6d8ff4d-d4kwk" [215ff4b8-3ae5-49a7-8a9f-6acb4d176b93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:04.908223   66218 system_pods.go:61] "etcd-no-preload-456788" [3ec7e177-1b68-4bff-aa4d-803f5346e1be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:04.908231   66218 system_pods.go:61] "kube-apiserver-no-preload-456788" [5e8bf0b0-9669-4f0c-8da1-523589158b16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:04.908236   66218 system_pods.go:61] "kube-controller-manager-no-preload-456788" [515363f7-bde1-4ba7-a5a9-6779f673afaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:04.908240   66218 system_pods.go:61] "kube-proxy-slnph" [29f503bf-ce19-425c-8174-2b8e7b27a424] Running
	I0429 20:06:04.908253   66218 system_pods.go:61] "kube-scheduler-no-preload-456788" [4f394af0-6452-49dd-9770-7c6bfcff3936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:04.908258   66218 system_pods.go:61] "metrics-server-569cc877fc-6mpnm" [5f183615-a243-410a-a524-ebdaa65e6400] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:04.908262   66218 system_pods.go:61] "storage-provisioner" [f74a777d-a3d7-4682-bad0-44bb993a2d43] Running
	I0429 20:06:04.908270   66218 system_pods.go:74] duration metric: took 190.098153ms to wait for pod list to return data ...
	I0429 20:06:04.908278   66218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:05.212876   66218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:05.212913   66218 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:05.212929   66218 node_conditions.go:105] duration metric: took 304.645545ms to run NodePressure ...
	I0429 20:06:05.212950   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:05.913252   66218 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:05.928914   66218 kubeadm.go:733] kubelet initialised
	I0429 20:06:05.928947   66218 kubeadm.go:734] duration metric: took 15.668535ms waiting for restarted kubelet to initialise ...
	I0429 20:06:05.928957   66218 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:05.937357   66218 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:05.091766   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:05.269730   66615 cache_images.go:92] duration metric: took 1.092006107s to LoadCachedImages
	W0429 20:06:05.269839   66615 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0429 20:06:05.269857   66615 kubeadm.go:928] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I0429 20:06:05.269988   66615 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-919612 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:05.270088   66615 ssh_runner.go:195] Run: crio config
	I0429 20:06:05.322439   66615 cni.go:84] Creating CNI manager for ""
	I0429 20:06:05.322471   66615 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:05.322486   66615 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:05.322522   66615 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-919612 NodeName:old-k8s-version-919612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0429 20:06:05.322746   66615 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-919612"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:05.322810   66615 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0429 20:06:05.340981   66615 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:05.341058   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:05.357048   66615 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0429 20:06:05.384352   66615 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:05.407887   66615 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0429 20:06:05.431531   66615 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:05.437567   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:05.457652   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:05.610358   66615 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:05.641538   66615 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612 for IP: 192.168.72.240
	I0429 20:06:05.641568   66615 certs.go:194] generating shared ca certs ...
	I0429 20:06:05.641583   66615 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:05.641758   66615 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:05.641831   66615 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:05.641843   66615 certs.go:256] generating profile certs ...
	I0429 20:06:05.641948   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.key
	I0429 20:06:05.642020   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key.5df5e618
	I0429 20:06:05.642083   66615 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key
	I0429 20:06:05.642256   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:05.642304   66615 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:05.642325   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:05.642364   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:05.642401   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:05.642435   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:05.642489   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:05.643156   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:05.691350   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:05.734434   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:05.773056   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:05.819778   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 20:06:05.868256   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:05.911589   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:05.957714   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 20:06:06.002120   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:06.039736   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:06.079636   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:06.118317   66615 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:06.145932   66615 ssh_runner.go:195] Run: openssl version
	I0429 20:06:06.152970   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:06.166609   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.171939   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.172033   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.179153   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:06.193491   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:06.207800   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214803   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214876   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.222154   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:06.236908   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:06.254197   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260797   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260863   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.267635   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:06.282727   66615 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:06.289580   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:06.301014   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:06.310503   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:06.318708   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:06.325718   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:06.332690   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
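
Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; a failure here would force regeneration. The equivalent check written with Go's crypto/x509, as a sketch rather than minikube's certs code; the path used in main is one of the files checked above.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM certificate at path is still valid for
    // at least the given duration, i.e. the equivalent of
    // `openssl x509 -noout -in <path> -checkend <seconds>`.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // still valid if the expiry lies beyond now + d
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }
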
	I0429 20:06:06.339914   66615 kubeadm.go:391] StartCluster: {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:06.340012   66615 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:06.340069   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.391511   66615 cri.go:89] found id: ""
	I0429 20:06:06.391618   66615 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:06.408955   66615 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:06.408985   66615 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:06.408991   66615 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:06.409060   66615 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:06.425276   66615 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:06.426397   66615 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-919612" does not appear in /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:06:06.427298   66615 kubeconfig.go:62] /home/jenkins/minikube-integration/18774-7754/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-919612" cluster setting kubeconfig missing "old-k8s-version-919612" context setting]
	I0429 20:06:06.428287   66615 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:06.429908   66615 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:06.443630   66615 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.240
	I0429 20:06:06.443674   66615 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:06.443686   66615 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:06.443753   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.486251   66615 cri.go:89] found id: ""
	I0429 20:06:06.486339   66615 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:06.507136   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:06.523798   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:06.523828   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:06.523887   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:06:06.536668   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:06.536735   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:06.547800   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:06:06.560435   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:06.560517   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:06.572227   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.582772   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:06.582825   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.594168   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:06:06.605940   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:06.606013   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:06.621829   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:06.637520   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:06.779910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:07.921143   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.141191032s)
	I0429 20:06:07.921178   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.172381   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.276243   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.398312   66615 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:08.398424   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:08.899388   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.399344   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.898731   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:07.168679   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:07.169214   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:07.169264   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:07.169146   67743 retry.go:31] will retry after 2.050354993s: waiting for machine to come up
	I0429 20:06:09.221915   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:09.222545   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:09.222581   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:09.222449   67743 retry.go:31] will retry after 2.544889222s: waiting for machine to come up
	I0429 20:06:07.947247   66218 pod_ready.go:102] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:10.449364   66218 pod_ready.go:102] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:10.943731   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:10.943754   66218 pod_ready.go:81] duration metric: took 5.006367348s for pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:10.943763   66218 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.453825   66218 pod_ready.go:92] pod "etcd-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.453853   66218 pod_ready.go:81] duration metric: took 1.510082371s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.453865   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.462971   66218 pod_ready.go:92] pod "kube-apiserver-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.462997   66218 pod_ready.go:81] duration metric: took 9.123374ms for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.463011   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.471032   66218 pod_ready.go:92] pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.471066   66218 pod_ready.go:81] duration metric: took 8.024113ms for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.471077   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-slnph" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.478671   66218 pod_ready.go:92] pod "kube-proxy-slnph" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.478695   66218 pod_ready.go:81] duration metric: took 7.609313ms for pod "kube-proxy-slnph" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.478706   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.542851   66218 pod_ready.go:92] pod "kube-scheduler-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.542875   66218 pod_ready.go:81] duration metric: took 64.16109ms for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.542888   66218 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" ...
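
The pod_ready lines above wait, pod by pod, for the Ready condition to flip to True, with a 4m0s cap per pod. A condensed library-style sketch of that wait using client-go, assuming an already-constructed kubernetes.Interface; this is not minikube's pod_ready.go, the package name is arbitrary, and it needs the k8s.io/client-go module to build.

    package podwait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady returns true once the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitPodReady polls a named pod in kube-system until it is Ready or the
    // timeout expires, roughly what the "waiting up to 4m0s for pod ..." lines do.
    func waitPodReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %q not Ready within %s", name, timeout)
    }
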
	I0429 20:06:10.399055   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:10.898742   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.399250   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.898511   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.399301   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.899399   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.399242   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.899417   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.398526   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.898976   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
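
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above poll every 500ms for the apiserver process to appear after the kubeadm init phases. A small Go sketch of that poll; the command string is taken from the log, but running it locally rather than over SSH and the 1-minute timeout are simplifications.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process started
    // by minikube shows up, or the timeout elapses.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // -x: exact match, -n: newest match, -f: match the full command line
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil // process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(time.Minute); err != nil {
            fmt.Println(err)
        }
    }
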
	I0429 20:06:11.768576   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:11.768967   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:11.769003   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:11.768924   67743 retry.go:31] will retry after 3.829285986s: waiting for machine to come up
	I0429 20:06:17.032004   65980 start.go:364] duration metric: took 56.727982697s to acquireMachinesLock for "embed-certs-161370"
	I0429 20:06:17.032074   65980 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:06:17.032085   65980 fix.go:54] fixHost starting: 
	I0429 20:06:17.032452   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:17.032485   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:17.050767   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44211
	I0429 20:06:17.051181   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:17.051655   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:06:17.051680   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:17.052002   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:17.052188   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:17.052363   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:06:17.053975   65980 fix.go:112] recreateIfNeeded on embed-certs-161370: state=Stopped err=<nil>
	I0429 20:06:17.054002   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	W0429 20:06:17.054167   65980 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:06:17.056054   65980 out.go:177] * Restarting existing kvm2 VM for "embed-certs-161370" ...
	I0429 20:06:14.550615   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:17.050288   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:17.057452   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Start
	I0429 20:06:17.057630   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring networks are active...
	I0429 20:06:17.058381   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring network default is active
	I0429 20:06:17.058680   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring network mk-embed-certs-161370 is active
	I0429 20:06:17.059024   65980 main.go:141] libmachine: (embed-certs-161370) Getting domain xml...
	I0429 20:06:17.059697   65980 main.go:141] libmachine: (embed-certs-161370) Creating domain...
	I0429 20:06:15.599423   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.599897   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has current primary IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.599915   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Found IP for machine: 192.168.61.106
	I0429 20:06:15.599929   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Reserving static IP address...
	I0429 20:06:15.600318   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Reserved static IP address: 192.168.61.106
	I0429 20:06:15.600360   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-866143", mac: "52:54:00:af:de:09", ip: "192.168.61.106"} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.600375   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for SSH to be available...
	I0429 20:06:15.600405   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | skip adding static IP to network mk-default-k8s-diff-port-866143 - found existing host DHCP lease matching {name: "default-k8s-diff-port-866143", mac: "52:54:00:af:de:09", ip: "192.168.61.106"}
	I0429 20:06:15.600423   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Getting to WaitForSSH function...
	I0429 20:06:15.602983   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.603379   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.603414   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.603581   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Using SSH client type: external
	I0429 20:06:15.603611   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa (-rw-------)
	I0429 20:06:15.603675   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:06:15.603701   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | About to run SSH command:
	I0429 20:06:15.603733   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | exit 0
	I0429 20:06:15.734933   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | SSH cmd err, output: <nil>: 
	I0429 20:06:15.735306   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetConfigRaw
	I0429 20:06:15.735918   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:15.738878   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.739349   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.739385   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.739745   66875 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/config.json ...
	I0429 20:06:15.739943   66875 machine.go:94] provisionDockerMachine start ...
	I0429 20:06:15.739966   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:15.740215   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.742731   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.743068   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.743097   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.743253   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.743448   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.743592   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.743729   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.743859   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.744066   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.744080   66875 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:06:15.855258   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:06:15.855292   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:15.855585   66875 buildroot.go:166] provisioning hostname "default-k8s-diff-port-866143"
	I0429 20:06:15.855604   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:15.855792   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.858278   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.858644   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.858672   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.858802   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.858996   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.859179   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.859327   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.859498   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.859667   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.859682   66875 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-866143 && echo "default-k8s-diff-port-866143" | sudo tee /etc/hostname
	I0429 20:06:15.986031   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-866143
	
	I0429 20:06:15.986094   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.989211   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.989633   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.989666   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.989858   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.990078   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.990281   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.990441   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.990591   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.990746   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.990763   66875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-866143' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-866143/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-866143' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:06:16.119358   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:06:16.119389   66875 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:06:16.119420   66875 buildroot.go:174] setting up certificates
	I0429 20:06:16.119431   66875 provision.go:84] configureAuth start
	I0429 20:06:16.119442   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:16.119741   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:16.122611   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.122991   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.123016   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.123180   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.125378   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.125673   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.125713   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.125805   66875 provision.go:143] copyHostCerts
	I0429 20:06:16.125883   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:06:16.125896   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:06:16.125963   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:06:16.126112   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:06:16.126125   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:06:16.126152   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:06:16.126234   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:06:16.126245   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:06:16.126270   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:06:16.126348   66875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-866143 san=[127.0.0.1 192.168.61.106 default-k8s-diff-port-866143 localhost minikube]
	I0429 20:06:16.280583   66875 provision.go:177] copyRemoteCerts
	I0429 20:06:16.280641   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:06:16.280665   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.283452   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.283760   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.283800   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.283999   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.284175   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.284335   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.284428   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:16.374564   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:06:16.408695   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0429 20:06:16.441975   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:06:16.470921   66875 provision.go:87] duration metric: took 351.479703ms to configureAuth
	I0429 20:06:16.470946   66875 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:06:16.471124   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:16.471205   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.473799   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.474105   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.474139   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.474291   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.474502   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.474692   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.474830   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.474995   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:16.475152   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:16.475167   66875 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:06:16.774044   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:06:16.774093   66875 machine.go:97] duration metric: took 1.034135495s to provisionDockerMachine
	I0429 20:06:16.774108   66875 start.go:293] postStartSetup for "default-k8s-diff-port-866143" (driver="kvm2")
	I0429 20:06:16.774123   66875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:06:16.774148   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:16.774509   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:06:16.774539   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.777163   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.777603   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.777639   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.777779   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.777949   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.778109   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.778259   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:16.866104   66875 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:06:16.870760   66875 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:06:16.870780   66875 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:06:16.870839   66875 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:06:16.870916   66875 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:06:16.871003   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:06:16.881137   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:16.911284   66875 start.go:296] duration metric: took 137.163661ms for postStartSetup
	I0429 20:06:16.911318   66875 fix.go:56] duration metric: took 20.332102679s for fixHost
	I0429 20:06:16.911337   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.914440   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.914810   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.914838   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.915087   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.915287   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.915511   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.915692   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.915886   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:16.916034   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:16.916045   66875 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:06:17.031867   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421177.003309274
	
	I0429 20:06:17.031892   66875 fix.go:216] guest clock: 1714421177.003309274
	I0429 20:06:17.031900   66875 fix.go:229] Guest: 2024-04-29 20:06:17.003309274 +0000 UTC Remote: 2024-04-29 20:06:16.911322778 +0000 UTC m=+211.453402116 (delta=91.986496ms)
	I0429 20:06:17.031921   66875 fix.go:200] guest clock delta is within tolerance: 91.986496ms
	I0429 20:06:17.031928   66875 start.go:83] releasing machines lock for "default-k8s-diff-port-866143", held for 20.452741912s
	I0429 20:06:17.031957   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.032261   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:17.035096   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.035467   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.035497   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.035620   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036246   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036425   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036515   66875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:06:17.036569   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:17.036698   66875 ssh_runner.go:195] Run: cat /version.json
	I0429 20:06:17.036726   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:17.039300   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039595   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039813   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.039848   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039907   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:17.039984   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.040017   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.040069   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:17.040172   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:17.040230   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:17.040329   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:17.040382   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:17.040483   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:17.040636   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:17.137510   66875 ssh_runner.go:195] Run: systemctl --version
	I0429 20:06:17.160834   66875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:06:17.320792   66875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:06:17.328367   66875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:06:17.328448   66875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:06:17.349698   66875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:06:17.349724   66875 start.go:494] detecting cgroup driver to use...
	I0429 20:06:17.349807   66875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:06:17.372156   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:06:17.388142   66875 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:06:17.388206   66875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:06:17.406108   66875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:06:17.422323   66875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:06:17.555079   66875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:06:17.727126   66875 docker.go:233] disabling docker service ...
	I0429 20:06:17.727194   66875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:06:17.743136   66875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:06:17.757045   66875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:06:17.885705   66875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:06:18.021993   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:06:18.039020   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:06:18.063267   66875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:06:18.063330   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.076473   66875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:06:18.076545   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.089566   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.102912   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.116940   66875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:06:18.130940   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.150505   66875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.177724   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.191088   66875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:06:18.203560   66875 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:06:18.203635   66875 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:06:18.221087   66875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:06:18.233719   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:18.383406   66875 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:06:18.543941   66875 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:06:18.544029   66875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:06:18.550828   66875 start.go:562] Will wait 60s for crictl version
	I0429 20:06:18.550891   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:06:18.556158   66875 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:06:18.607004   66875 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:06:18.607083   66875 ssh_runner.go:195] Run: crio --version
	I0429 20:06:18.638282   66875 ssh_runner.go:195] Run: crio --version
	I0429 20:06:18.674135   66875 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:06:15.399474   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:15.899352   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.899106   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.899205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.399351   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.899319   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.399303   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.898824   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.675590   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:18.678673   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:18.679055   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:18.679096   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:18.679272   66875 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0429 20:06:18.685110   66875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:18.705804   66875 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:06:18.705967   66875 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:06:18.706036   66875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:18.750754   66875 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:06:18.750823   66875 ssh_runner.go:195] Run: which lz4
	I0429 20:06:18.755893   66875 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:06:18.760892   66875 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:06:18.760921   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 20:06:19.055680   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:21.552080   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:18.301855   65980 main.go:141] libmachine: (embed-certs-161370) Waiting to get IP...
	I0429 20:06:18.302804   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.303231   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.303273   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.303198   67921 retry.go:31] will retry after 279.123731ms: waiting for machine to come up
	I0429 20:06:18.584013   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.584661   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.584703   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.584630   67921 retry.go:31] will retry after 239.910483ms: waiting for machine to come up
	I0429 20:06:18.825978   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.826393   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.826425   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.826349   67921 retry.go:31] will retry after 312.324444ms: waiting for machine to come up
	I0429 20:06:19.139999   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:19.140583   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:19.140611   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:19.140535   67921 retry.go:31] will retry after 498.525047ms: waiting for machine to come up
	I0429 20:06:19.640244   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:19.640797   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:19.640828   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:19.640756   67921 retry.go:31] will retry after 479.301061ms: waiting for machine to come up
	I0429 20:06:20.121396   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:20.121982   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:20.122015   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:20.121941   67921 retry.go:31] will retry after 706.389673ms: waiting for machine to come up
	I0429 20:06:20.829691   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:20.830191   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:20.830247   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:20.830166   67921 retry.go:31] will retry after 1.145397308s: waiting for machine to come up
	I0429 20:06:21.977290   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:21.977747   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:21.977779   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:21.977691   67921 retry.go:31] will retry after 955.977029ms: waiting for machine to come up
	I0429 20:06:20.399233   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.898571   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.398855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.898885   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.399328   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.398965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.899248   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.398833   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.899039   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.561047   66875 crio.go:462] duration metric: took 1.805186908s to copy over tarball
	I0429 20:06:20.561137   66875 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:23.264543   66875 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.703371921s)
	I0429 20:06:23.264573   66875 crio.go:469] duration metric: took 2.7034954s to extract the tarball
	I0429 20:06:23.264581   66875 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:23.303558   66875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:23.356825   66875 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 20:06:23.356854   66875 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:06:23.356873   66875 kubeadm.go:928] updating node { 192.168.61.106 8444 v1.30.0 crio true true} ...
	I0429 20:06:23.357007   66875 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-866143 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:23.357105   66875 ssh_runner.go:195] Run: crio config
	I0429 20:06:23.414195   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:06:23.414225   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:23.414237   66875 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:23.414267   66875 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.106 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-866143 NodeName:default-k8s-diff-port-866143 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:06:23.414459   66875 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.106
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-866143"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:23.414524   66875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:06:23.425977   66875 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:23.426089   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:23.437270   66875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0429 20:06:23.457613   66875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:23.479383   66875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0429 20:06:23.509517   66875 ssh_runner.go:195] Run: grep 192.168.61.106	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:23.514202   66875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:23.528721   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:23.666941   66875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:23.687710   66875 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143 for IP: 192.168.61.106
	I0429 20:06:23.687745   66875 certs.go:194] generating shared ca certs ...
	I0429 20:06:23.687768   66875 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:23.687952   66875 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:23.688005   66875 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:23.688020   66875 certs.go:256] generating profile certs ...
	I0429 20:06:23.688168   66875 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.key
	I0429 20:06:23.688260   66875 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.key.5d7fbd4b
	I0429 20:06:23.688318   66875 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.key
	I0429 20:06:23.688481   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:23.688532   66875 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:23.688548   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:23.688592   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:23.688628   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:23.688663   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:23.688722   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:23.689611   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:23.743834   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:23.783115   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:23.819086   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:23.850794   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0429 20:06:23.882477   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:23.918607   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:23.947837   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:06:23.977241   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:24.005902   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:24.034910   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:24.064119   66875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:24.083879   66875 ssh_runner.go:195] Run: openssl version
	I0429 20:06:24.090651   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:24.104929   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.110955   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.111034   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.117914   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:24.131076   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:24.144790   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.150842   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.150926   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.157842   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:24.171737   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:24.186164   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.191924   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.191995   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.199385   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:24.213392   66875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:24.219369   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:24.226784   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:24.234655   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:24.242406   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:24.249904   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:24.257400   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 20:06:24.264165   66875 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:24.264290   66875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:24.264353   66875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:24.310126   66875 cri.go:89] found id: ""
	I0429 20:06:24.310197   66875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:24.322134   66875 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:24.322155   66875 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:24.322160   66875 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:24.322223   66875 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:24.337713   66875 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:24.339184   66875 kubeconfig.go:125] found "default-k8s-diff-port-866143" server: "https://192.168.61.106:8444"
	I0429 20:06:24.342237   66875 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:24.353500   66875 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.106
	I0429 20:06:24.353545   66875 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:24.353560   66875 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:24.353627   66875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:24.399835   66875 cri.go:89] found id: ""
	I0429 20:06:24.399918   66875 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:24.426456   66875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:24.440261   66875 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:24.440282   66875 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:24.440376   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0429 20:06:24.450699   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:24.450766   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:24.462870   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0429 20:06:24.474894   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:24.474961   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:24.488607   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0429 20:06:24.499626   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:24.499685   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:24.514156   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0429 20:06:24.525958   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:24.526018   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
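The lines above repeat one pattern per kubeconfig: grep the file for the expected control-plane endpoint, and if the file is missing or points elsewhere, delete it so the following "kubeadm init phase kubeconfig" regenerates it. The sketch below is an illustrative local equivalent, not minikube's kubeadm.go code; the endpoint string and file list are copied from the log.

// Illustrative sketch of the stale-kubeconfig cleanup shown above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanupStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong apiserver endpoint: remove it so
			// "kubeadm init phase kubeconfig" rewrites it from scratch.
			os.Remove(f)
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8444")
}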
	I0429 20:06:24.537063   66875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:24.548503   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:24.687916   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:24.051367   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:26.550970   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:22.935362   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:22.935797   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:22.935827   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:22.935746   67921 retry.go:31] will retry after 1.25494649s: waiting for machine to come up
	I0429 20:06:24.192017   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:24.192613   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:24.192641   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:24.192556   67921 retry.go:31] will retry after 1.641885834s: waiting for machine to come up
	I0429 20:06:25.836686   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:25.837170   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:25.837193   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:25.837125   67921 retry.go:31] will retry after 2.794216099s: waiting for machine to come up
	I0429 20:06:25.398515   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:25.898944   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.399360   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.899294   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.399520   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.899434   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.398734   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.898479   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.399413   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.899236   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.234143   66875 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.546180467s)
	I0429 20:06:26.234181   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.502030   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.577778   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.689836   66875 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:26.689982   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.190231   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.690207   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.729434   66875 api_server.go:72] duration metric: took 1.039599386s to wait for apiserver process to appear ...
	I0429 20:06:27.729473   66875 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:06:27.729497   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:27.730016   66875 api_server.go:269] stopped: https://192.168.61.106:8444/healthz: Get "https://192.168.61.106:8444/healthz": dial tcp 192.168.61.106:8444: connect: connection refused
	I0429 20:06:28.230353   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:28.551049   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:31.051387   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:31.411151   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:31.411188   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:31.411205   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:31.424074   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:31.424106   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:31.729916   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:31.737269   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:31.737299   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:32.229834   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:32.237900   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:32.237935   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:32.730529   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:32.735043   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 200:
	ok
	I0429 20:06:32.743999   66875 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:32.744026   66875 api_server.go:131] duration metric: took 5.014546615s to wait for apiserver health ...
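The healthz wait above first sees a connection refused, then 403 (the anonymous probe is forbidden), then 500 while post-start hooks such as rbac/bootstrap-roles are still pending, and finally 200. A minimal sketch of that polling loop follows; it is not minikube's api_server.go, and the URL, poll interval, and TLS handling (skipping verification of the self-signed apiserver certificate) are assumptions for illustration.

// Minimal healthz polling sketch: keep probing until the endpoint returns 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 (anonymous user) and 500 (post-start hooks pending) are
			// transient here; only a 200 means the apiserver is healthy.
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", string(body))
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.106:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}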
	I0429 20:06:32.744035   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:06:32.744041   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:32.745889   66875 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:28.633451   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:28.633950   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:28.633979   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:28.633906   67921 retry.go:31] will retry after 2.251092878s: waiting for machine to come up
	I0429 20:06:30.887722   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:30.888251   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:30.888283   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:30.888208   67921 retry.go:31] will retry after 2.941721217s: waiting for machine to come up
	I0429 20:06:32.747198   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:32.760578   66875 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
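The "Configuring bridge CNI" step above writes a conflist into /etc/cni/net.d on the guest. The sketch below writes a generic bridge + portmap conflist to the same path; the JSON here is a standard example and is not claimed to match the exact 496-byte file minikube generates.

// Illustrative only: drop a generic bridge CNI conflist into /etc/cni/net.d (needs root).
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}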
	I0429 20:06:32.786719   66875 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:32.797795   66875 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:32.797830   66875 system_pods.go:61] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:32.797838   66875 system_pods.go:61] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:32.797844   66875 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:32.797854   66875 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:32.797859   66875 system_pods.go:61] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0429 20:06:32.797866   66875 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:32.797873   66875 system_pods.go:61] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:32.797880   66875 system_pods.go:61] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0429 20:06:32.797888   66875 system_pods.go:74] duration metric: took 11.14839ms to wait for pod list to return data ...
	I0429 20:06:32.797902   66875 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:32.801888   66875 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:32.801909   66875 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:32.801918   66875 node_conditions.go:105] duration metric: took 4.010782ms to run NodePressure ...
	I0429 20:06:32.801934   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:33.088679   66875 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:33.094165   66875 kubeadm.go:733] kubelet initialised
	I0429 20:06:33.094185   66875 kubeadm.go:734] duration metric: took 5.479589ms waiting for restarted kubelet to initialise ...
	I0429 20:06:33.094192   66875 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:33.101524   66875 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.106879   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.106911   66875 pod_ready.go:81] duration metric: took 5.352162ms for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.106923   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.106946   66875 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.111446   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.111469   66875 pod_ready.go:81] duration metric: took 4.507858ms for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.111478   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.111483   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.115613   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.115643   66875 pod_ready.go:81] duration metric: took 4.152743ms for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.115654   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.115663   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.191660   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.191695   66875 pod_ready.go:81] duration metric: took 76.012388ms for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.191707   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.191713   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.592489   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-proxy-zddtx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.592522   66875 pod_ready.go:81] duration metric: took 400.801861ms for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.592535   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-proxy-zddtx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.592544   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.990624   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.990655   66875 pod_ready.go:81] duration metric: took 398.101779ms for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.990667   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.990673   66875 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:34.391120   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:34.391148   66875 pod_ready.go:81] duration metric: took 400.467456ms for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:34.391165   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:34.391173   66875 pod_ready.go:38] duration metric: took 1.296972775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
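The pod_ready checks above are skipped for each system pod because the hosting node still reports "Ready":"False" right after the restart. For reference, a Ready check like the one being performed can be expressed with client-go as in the sketch below; the function name and the kubeconfig and pod names are taken from or modeled on the log, and this is not minikube's pod_ready.go implementation.

// Sketch: report whether a pod has its Ready condition set to True, via client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18774-7754/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(client, "kube-system", "coredns-7db6d8ff4d-7m65s")
	fmt.Println(ready, err)
}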
	I0429 20:06:34.391191   66875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:06:34.408817   66875 ops.go:34] apiserver oom_adj: -16
	I0429 20:06:34.408845   66875 kubeadm.go:591] duration metric: took 10.086677852s to restartPrimaryControlPlane
	I0429 20:06:34.408856   66875 kubeadm.go:393] duration metric: took 10.144698168s to StartCluster
	I0429 20:06:34.408876   66875 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:34.408961   66875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:06:34.411093   66875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:34.411379   66875 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:06:34.413055   66875 out.go:177] * Verifying Kubernetes components...
	I0429 20:06:34.411518   66875 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:06:34.411607   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:34.414229   66875 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414239   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:34.414261   66875 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-866143"
	I0429 20:06:34.414238   66875 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414232   66875 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414341   66875 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.414355   66875 addons.go:243] addon metrics-server should already be in state true
	I0429 20:06:34.414382   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.414381   66875 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.414396   66875 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:06:34.414439   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.414650   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414677   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.414746   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414758   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.414890   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414923   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.433279   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I0429 20:06:34.433827   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.434444   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.434474   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.434873   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.435436   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.435483   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.435739   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46105
	I0429 20:06:34.435746   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0429 20:06:34.436117   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.436245   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.436638   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.436678   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.436734   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.436747   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.437011   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.437057   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.437218   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.437601   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.437630   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.441092   66875 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.441118   66875 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:06:34.441146   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.441550   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.441582   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.451571   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0429 20:06:34.452041   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.452627   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.452650   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.453080   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.453401   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.455145   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0429 20:06:34.455335   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.457339   66875 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:34.455992   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.456826   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I0429 20:06:34.458912   66875 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:06:34.458925   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:06:34.458942   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.459155   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.459818   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.459836   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.460050   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.460068   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.460196   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.460406   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.460450   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.461005   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.461051   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.462529   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.462624   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.464140   66875 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:06:30.398730   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:30.898542   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.399309   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.898751   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.399374   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.899262   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.398723   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.899281   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.399356   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.899305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.463014   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.463255   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.465585   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.465598   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:06:34.465623   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:06:34.465652   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.465703   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.465892   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.466043   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.468951   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.469342   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.469407   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.469645   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.469817   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.469984   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.470137   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.484411   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0429 20:06:34.484864   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.485366   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.485396   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.485759   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.485937   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.487715   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.487962   66875 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:06:34.487975   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:06:34.487989   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.490407   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.490724   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.490748   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.490890   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.491045   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.491146   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.491274   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.618088   66875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:34.638582   66875 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-866143" to be "Ready" ...
	I0429 20:06:34.729046   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:06:34.729633   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:06:34.729649   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:06:34.752200   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:06:34.770107   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:06:34.770143   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:06:34.847081   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:06:34.847117   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:06:34.889992   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:06:35.821090   66875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091986938s)
	I0429 20:06:35.821127   66875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.068905753s)
	I0429 20:06:35.821145   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821150   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821157   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821162   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821490   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821505   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821514   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821524   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821528   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821534   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821549   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821540   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821902   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821923   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821936   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.822007   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.822024   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.828303   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.828348   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.828591   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.828606   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.828632   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.843540   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.843566   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.843860   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.843877   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.843886   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.843894   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.844127   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.844170   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.844188   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.844203   66875 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-866143"
	I0429 20:06:35.846214   66875 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0429 20:06:33.549917   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:35.550564   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:33.831181   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:33.831552   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:33.831581   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:33.831506   67921 retry.go:31] will retry after 5.040485428s: waiting for machine to come up
	I0429 20:06:35.399419   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:35.899244   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.398934   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.898847   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.399273   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.899102   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.398748   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.399524   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.898813   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:35.847674   66875 addons.go:505] duration metric: took 1.436173952s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
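The addon enable step above boils down to applying the staged manifests with the on-host kubectl against /var/lib/minikube/kubeconfig. The sketch below mirrors that apply command locally; the binary and manifest paths are copied from the log, the helper names are mine, and the real minikube run executes the same command over SSH with sudo.

// Sketch: apply a set of addon manifests with kubectl and an explicit kubeconfig.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyAddons(manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.30.0/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyAddons(
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	fmt.Println(err)
}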
	I0429 20:06:36.641963   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:38.642738   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:38.873188   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.873625   65980 main.go:141] libmachine: (embed-certs-161370) Found IP for machine: 192.168.50.184
	I0429 20:06:38.873653   65980 main.go:141] libmachine: (embed-certs-161370) Reserving static IP address...
	I0429 20:06:38.873669   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has current primary IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.874037   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "embed-certs-161370", mac: "52:54:00:e6:05:1f", ip: "192.168.50.184"} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:38.874091   65980 main.go:141] libmachine: (embed-certs-161370) Reserved static IP address: 192.168.50.184
	I0429 20:06:38.874113   65980 main.go:141] libmachine: (embed-certs-161370) DBG | skip adding static IP to network mk-embed-certs-161370 - found existing host DHCP lease matching {name: "embed-certs-161370", mac: "52:54:00:e6:05:1f", ip: "192.168.50.184"}
	I0429 20:06:38.874132   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Getting to WaitForSSH function...
	I0429 20:06:38.874151   65980 main.go:141] libmachine: (embed-certs-161370) Waiting for SSH to be available...
	I0429 20:06:38.875891   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.876205   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:38.876237   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.876401   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Using SSH client type: external
	I0429 20:06:38.876425   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa (-rw-------)
	I0429 20:06:38.876455   65980 main.go:141] libmachine: (embed-certs-161370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:06:38.876475   65980 main.go:141] libmachine: (embed-certs-161370) DBG | About to run SSH command:
	I0429 20:06:38.876486   65980 main.go:141] libmachine: (embed-certs-161370) DBG | exit 0
	I0429 20:06:39.006684   65980 main.go:141] libmachine: (embed-certs-161370) DBG | SSH cmd err, output: <nil>: 
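The WaitForSSH step above repeatedly runs "exit 0" over SSH (key auth as the "docker" user, strict host-key checking disabled, per the logged options) until the guest answers. A Go sketch of an equivalent availability probe follows; it is not libmachine's implementation, and the address and key path are copied from the log for illustration.

// Sketch: poll an SSH endpoint until a trivial command succeeds.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			defer client.Close()
			session, err := client.NewSession()
			if err != nil {
				return err
			}
			defer session.Close()
			// Same no-op the log runs to prove the machine is reachable.
			return session.Run("exit 0")
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh on %s never became available", addr)
}

func main() {
	err := waitForSSH("192.168.50.184:22", "docker",
		"/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa",
		2*time.Minute)
	fmt.Println(err)
}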
	I0429 20:06:39.007072   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetConfigRaw
	I0429 20:06:39.007701   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:39.010189   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.010539   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.010577   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.010783   65980 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/config.json ...
	I0429 20:06:39.010970   65980 machine.go:94] provisionDockerMachine start ...
	I0429 20:06:39.010986   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:39.011196   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.013422   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.013832   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.013862   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.013986   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.014183   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.014377   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.014528   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.014710   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.014868   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.014878   65980 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:06:39.119151   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:06:39.119183   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.119425   65980 buildroot.go:166] provisioning hostname "embed-certs-161370"
	I0429 20:06:39.119449   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.119606   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.122418   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.122725   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.122755   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.122894   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.123087   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.123235   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.123371   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.123547   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.123719   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.123734   65980 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-161370 && echo "embed-certs-161370" | sudo tee /etc/hostname
	I0429 20:06:39.247323   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-161370
	
	I0429 20:06:39.247360   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.250202   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.250594   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.250623   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.250761   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.250956   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.251158   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.251354   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.251536   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.251724   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.251746   65980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-161370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-161370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-161370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:06:39.370366   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:06:39.370395   65980 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:06:39.370415   65980 buildroot.go:174] setting up certificates
	I0429 20:06:39.370429   65980 provision.go:84] configureAuth start
	I0429 20:06:39.370441   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.370754   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:39.373600   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.373977   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.374011   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.374305   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.376654   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.376999   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.377032   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.377156   65980 provision.go:143] copyHostCerts
	I0429 20:06:39.377217   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:06:39.377228   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:06:39.377279   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:06:39.377367   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:06:39.377375   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:06:39.377393   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:06:39.377446   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:06:39.377453   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:06:39.377470   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:06:39.377523   65980 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.embed-certs-161370 san=[127.0.0.1 192.168.50.184 embed-certs-161370 localhost minikube]
	I0429 20:06:39.441865   65980 provision.go:177] copyRemoteCerts
	I0429 20:06:39.441931   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:06:39.441954   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.445189   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.445633   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.445677   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.445918   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.446166   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.446364   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.446521   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:39.535703   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:06:39.571033   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0429 20:06:39.604181   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:06:39.639250   65980 provision.go:87] duration metric: took 268.808275ms to configureAuth
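Aside: the server certificate generated by provision.go:117 above is issued with SANs [127.0.0.1 192.168.50.184 embed-certs-161370 localhost minikube] and then copied to /etc/docker/server.pem on the guest. A quick way to confirm the SANs landed, sketched against that guest path (not something the test itself runs):

    sudo openssl x509 -noout -text -in /etc/docker/server.pem \
      | grep -A1 'Subject Alternative Name'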
	I0429 20:06:39.639339   65980 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:06:39.639575   65980 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:39.639668   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.642544   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.642975   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.643006   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.643146   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.643348   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.643507   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.643671   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.643838   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.644011   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.644039   65980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:06:39.974134   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:06:39.974168   65980 machine.go:97] duration metric: took 963.184467ms to provisionDockerMachine
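Note on the %!s(MISSING) token in the printf command logged at 20:06:39.644 above: it is a Go format-verb artifact of the logger, not what was executed. Judging from the output echoed back by the command, the provisioning step on the guest amounts to this sketch:

    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio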
	I0429 20:06:39.974186   65980 start.go:293] postStartSetup for "embed-certs-161370" (driver="kvm2")
	I0429 20:06:39.974201   65980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:06:39.974229   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:39.974601   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:06:39.974636   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.977843   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.978295   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.978328   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.978528   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.978768   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.978939   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.979144   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.066379   65980 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:06:40.071720   65980 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:06:40.071742   65980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:06:40.071798   65980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:06:40.071875   65980 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:06:40.071965   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:06:40.082556   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:40.112774   65980 start.go:296] duration metric: took 138.571139ms for postStartSetup
	I0429 20:06:40.112827   65980 fix.go:56] duration metric: took 23.080734046s for fixHost
	I0429 20:06:40.112859   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.115931   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.116414   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.116448   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.116643   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.116859   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.117026   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.117169   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.117358   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:40.117560   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:40.117576   65980 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:06:40.223697   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421200.206855033
	
	I0429 20:06:40.223722   65980 fix.go:216] guest clock: 1714421200.206855033
	I0429 20:06:40.223732   65980 fix.go:229] Guest: 2024-04-29 20:06:40.206855033 +0000 UTC Remote: 2024-04-29 20:06:40.112832003 +0000 UTC m=+362.399028562 (delta=94.02303ms)
	I0429 20:06:40.223777   65980 fix.go:200] guest clock delta is within tolerance: 94.02303ms
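The date +%!s(MISSING).%!N(MISSING) line at 20:06:40.117 is the same logger artifact; going by the seconds.nanoseconds value it returned, the guest command is almost certainly:

    date +%s.%N        # e.g. 1714421200.206855033

fix.go then compares that value against the host clock and accepts the ~94ms delta as within tolerance.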
	I0429 20:06:40.223782   65980 start.go:83] releasing machines lock for "embed-certs-161370", held for 23.191744513s
	I0429 20:06:40.223804   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.224106   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:40.226904   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.227299   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.227328   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.227462   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.227955   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.228117   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.228199   65980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:06:40.228238   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.228353   65980 ssh_runner.go:195] Run: cat /version.json
	I0429 20:06:40.228378   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.230943   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231151   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231370   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.231401   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231585   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.231595   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.231629   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231794   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.231806   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.231982   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.232000   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.232182   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.232197   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.232303   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.337533   65980 ssh_runner.go:195] Run: systemctl --version
	I0429 20:06:40.347252   65980 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:06:40.494668   65980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:06:40.502707   65980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:06:40.502788   65980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:06:40.522261   65980 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:06:40.522298   65980 start.go:494] detecting cgroup driver to use...
	I0429 20:06:40.522368   65980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:06:40.540576   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:06:40.557130   65980 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:06:40.557203   65980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:06:40.573803   65980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:06:40.589730   65980 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:06:40.731625   65980 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:06:40.902594   65980 docker.go:233] disabling docker service ...
	I0429 20:06:40.902665   65980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:06:40.921454   65980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:06:40.938734   65980 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:06:41.081822   65980 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:06:41.237778   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
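The docker/cri-docker shutdown above is the usual stop, disable-socket, mask sequence; condensed into one sketch (systemctl accepts multiple units, so this is equivalent to the individual calls in the log):

    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service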
	I0429 20:06:41.254086   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:06:41.276277   65980 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:06:41.276362   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.288903   65980 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:06:41.288972   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.301347   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.313639   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.325885   65980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:06:41.338215   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.350839   65980 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.372124   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.385505   65980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:06:41.397626   65980 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:06:41.397704   65980 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:06:41.413915   65980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:06:41.427068   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:41.575690   65980 ssh_runner.go:195] Run: sudo systemctl restart crio
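Condensed, the CRI-O preparation between 20:06:41.27 and 20:06:41.57 boils down to the following sketch (the conmon_cgroup and default_sysctls sed edits are elided here; the ordering otherwise follows the log):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo modprobe br_netfilter     # the sysctl probe above failed because the module was not loaded yet
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio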
	I0429 20:06:41.748047   65980 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:06:41.748132   65980 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:06:41.753313   65980 start.go:562] Will wait 60s for crictl version
	I0429 20:06:41.753379   65980 ssh_runner.go:195] Run: which crictl
	I0429 20:06:41.757672   65980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:06:41.794045   65980 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:06:41.794150   65980 ssh_runner.go:195] Run: crio --version
	I0429 20:06:41.831177   65980 ssh_runner.go:195] Run: crio --version
	I0429 20:06:41.865125   65980 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:06:38.049006   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:40.050003   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:42.050213   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:41.866698   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:41.869477   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:41.869815   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:41.869848   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:41.870107   65980 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0429 20:06:41.874917   65980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:41.889196   65980 kubeadm.go:877] updating cluster {Name:embed-certs-161370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:06:41.889353   65980 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:06:41.889423   65980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:41.936285   65980 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:06:41.936352   65980 ssh_runner.go:195] Run: which lz4
	I0429 20:06:41.941893   65980 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:06:41.947071   65980 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:06:41.947112   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 20:06:40.399024   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:40.899056   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.399275   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.899285   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.399200   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.899243   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.398590   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.899346   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.143962   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:41.645981   66875 node_ready.go:49] node "default-k8s-diff-port-866143" has status "Ready":"True"
	I0429 20:06:41.646007   66875 node_ready.go:38] duration metric: took 7.007388661s for node "default-k8s-diff-port-866143" to be "Ready" ...
	I0429 20:06:41.646018   66875 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:41.652664   66875 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.657667   66875 pod_ready.go:92] pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.657685   66875 pod_ready.go:81] duration metric: took 4.993051ms for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.657694   66875 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.662632   66875 pod_ready.go:92] pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.662650   66875 pod_ready.go:81] duration metric: took 4.950519ms for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.662658   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.667488   66875 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.667509   66875 pod_ready.go:81] duration metric: took 4.844299ms for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.667520   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.672480   66875 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.672501   66875 pod_ready.go:81] duration metric: took 4.974639ms for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.672512   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:42.042828   66875 pod_ready.go:92] pod "kube-proxy-zddtx" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:42.042856   66875 pod_ready.go:81] duration metric: took 370.336555ms for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:42.042868   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.051930   66875 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:44.548970   66875 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:44.548999   66875 pod_ready.go:81] duration metric: took 2.506120519s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.549011   66875 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
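For orientation, the pod_ready.go waits in this block are the in-process equivalent of polling each system-critical component for the Ready condition. Roughly what one would express with kubectl as follows (an illustration only; the test uses the Go client directly, and the label differs per component as listed at 20:06:41.646):

    kubectl --context default-k8s-diff-port-866143 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s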
	I0429 20:06:44.051077   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:46.052233   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:43.759688   65980 crio.go:462] duration metric: took 1.817838869s to copy over tarball
	I0429 20:06:43.759784   65980 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:46.405802   65980 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.64598022s)
	I0429 20:06:46.405851   65980 crio.go:469] duration metric: took 2.646122331s to extract the tarball
	I0429 20:06:46.405861   65980 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:46.444700   65980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:46.503047   65980 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 20:06:46.503086   65980 cache_images.go:84] Images are preloaded, skipping loading
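After the preload tarball is extracted into /var and crictl confirms the images, a manual spot-check on the guest would look like this (assuming the same crictl binary the test invokes above):

    sudo crictl images | grep registry.k8s.io/kube-apiserver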
	I0429 20:06:46.503098   65980 kubeadm.go:928] updating node { 192.168.50.184 8443 v1.30.0 crio true true} ...
	I0429 20:06:46.503234   65980 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-161370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:46.503334   65980 ssh_runner.go:195] Run: crio config
	I0429 20:06:46.563489   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:06:46.563511   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:46.563523   65980 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:46.563542   65980 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-161370 NodeName:embed-certs-161370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:06:46.563662   65980 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-161370"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:46.563719   65980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:06:46.576288   65980 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:46.576350   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:46.586807   65980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0429 20:06:46.605883   65980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:46.626741   65980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0429 20:06:46.647223   65980 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:46.652262   65980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
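Both /etc/hosts updates above (host.minikube.internal at 20:06:41.874 and control-plane.minikube.internal here) use the same replace-or-append idiom; a generic version with a hypothetical helper name is sketched below. The temp-file-plus-sudo-cp dance is needed because a plain sudo redirect would be performed by the unprivileged shell, not by root:

    add_host_entry() {
      # $1 = IP, $2 = hostname; drop any existing tab-separated entry, then append a fresh one
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }
    add_host_entry 192.168.50.184 control-plane.minikube.internal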
	I0429 20:06:46.667095   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:46.804937   65980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:46.831022   65980 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370 for IP: 192.168.50.184
	I0429 20:06:46.831048   65980 certs.go:194] generating shared ca certs ...
	I0429 20:06:46.831067   65980 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:46.831251   65980 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:46.831295   65980 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:46.831306   65980 certs.go:256] generating profile certs ...
	I0429 20:06:46.831385   65980 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/client.key
	I0429 20:06:46.831440   65980 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.key.9384fac7
	I0429 20:06:46.831476   65980 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.key
	I0429 20:06:46.831582   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:46.831610   65980 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:46.831617   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:46.831635   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:46.831662   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:46.831691   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:46.831729   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:46.832571   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:46.896363   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:46.939336   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:46.976256   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:47.007777   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0429 20:06:47.045019   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:47.079584   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:47.114002   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:06:47.142163   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:47.170063   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:47.199098   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:47.228985   65980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:47.250928   65980 ssh_runner.go:195] Run: openssl version
	I0429 20:06:47.258215   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:47.271653   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.277100   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.277183   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.283876   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:47.297519   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:47.311104   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.316347   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.316408   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.322992   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:47.337744   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:47.351332   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.356912   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.356964   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.363339   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
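The 3ec20f2e.0, b5213941.0 and 51391683.0 symlink names above come from OpenSSL's subject-hash lookup convention: a CA in /etc/ssl/certs is found via a link named <subject-hash>.0. The log's test -L || ln -fs commands reduce to:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")    # b5213941 for this CA
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"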
	I0429 20:06:47.378501   65980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:47.383995   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:47.391157   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:47.398039   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:47.405117   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:47.412125   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:47.419257   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
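The openssl x509 -checkend 86400 calls above ask whether each certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would mark the cert for regeneration. Standalone:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least another day" || echo "expires within 24h"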
	I0429 20:06:47.425917   65980 kubeadm.go:391] StartCluster: {Name:embed-certs-161370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:47.426009   65980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:47.426049   65980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:47.469133   65980 cri.go:89] found id: ""
	I0429 20:06:47.469216   65980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:47.481852   65980 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:47.481878   65980 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:47.481883   65980 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:47.481926   65980 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:47.495254   65980 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:47.496760   65980 kubeconfig.go:125] found "embed-certs-161370" server: "https://192.168.50.184:8443"
	I0429 20:06:47.499898   65980 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:47.511866   65980 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.184
	I0429 20:06:47.511903   65980 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:47.511917   65980 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:47.511972   65980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:47.563879   65980 cri.go:89] found id: ""
	I0429 20:06:47.563956   65980 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:47.583490   65980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:47.595867   65980 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:47.595893   65980 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:47.595947   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:06:47.608218   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:47.608283   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:47.620329   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:06:47.631394   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:47.631527   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:47.643107   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:06:47.654164   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:47.654233   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:47.665890   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:06:47.676817   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:47.676859   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:47.688608   65980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:47.700068   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:45.398908   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:45.898619   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.398795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.899058   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.399257   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.899269   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.398874   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.898653   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.399305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.898855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.556692   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:49.056546   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:48.550949   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:50.551905   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:47.821391   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.623284   65980 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.31791052s)
	I0429 20:06:49.623343   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.870630   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.950525   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:50.061240   65980 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:50.061331   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.562165   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.062299   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.139853   65980 api_server.go:72] duration metric: took 1.078602354s to wait for apiserver process to appear ...
	I0429 20:06:51.139883   65980 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:06:51.139905   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:51.140472   65980 api_server.go:269] stopped: https://192.168.50.184:8443/healthz: Get "https://192.168.50.184:8443/healthz": dial tcp 192.168.50.184:8443: connect: connection refused
	I0429 20:06:51.640813   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:50.398577   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.899284   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.399361   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.899134   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.399211   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.898733   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.399280   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.898915   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.399264   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.898840   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.057650   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:53.559429   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:53.049570   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:55.049866   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:57.050558   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:54.540707   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:54.540765   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:54.540797   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:54.618982   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:54.619016   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:54.640333   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:54.674491   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:54.674542   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:55.140955   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:55.157479   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:55.157517   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:55.639999   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:55.646278   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:55.646311   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:56.140938   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:56.147336   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:56.147371   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:56.640927   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:56.647027   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:56.647054   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:57.140894   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:57.145193   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:57.145236   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:57.640842   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:57.645453   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:57.645478   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:58.140524   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:58.146317   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0429 20:06:58.153972   65980 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:58.154011   65980 api_server.go:131] duration metric: took 7.014120443s to wait for apiserver health ...
	I0429 20:06:58.154028   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:06:58.154036   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:58.155341   65980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:55.398622   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:55.898563   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.399306   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.898473   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.899278   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.399121   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.899291   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.399197   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.898901   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.056503   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:58.056988   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:59.053737   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:01.555480   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:58.156794   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:58.176870   65980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:58.215333   65980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:58.230619   65980 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:58.230655   65980 system_pods.go:61] "coredns-7db6d8ff4d-wjfff" [bd92e456-b538-49ae-984b-c6bcea6add30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:58.230667   65980 system_pods.go:61] "etcd-embed-certs-161370" [da2d022f-33c4-49b7-b997-a6783043f3e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:58.230675   65980 system_pods.go:61] "kube-apiserver-embed-certs-161370" [032913c9-bb91-46ba-ad4d-a4d5b63d806f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:58.230681   65980 system_pods.go:61] "kube-controller-manager-embed-certs-161370" [2f3ae1ff-0688-4c70-a888-5e1e640f64bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:58.230685   65980 system_pods.go:61] "kube-proxy-9kmg8" [01bbd2ca-24d2-4c7c-b4ea-79604ac3f486] Running
	I0429 20:06:58.230689   65980 system_pods.go:61] "kube-scheduler-embed-certs-161370" [c88ab7cc-1aef-48cb-814e-eff8e946885c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:58.230694   65980 system_pods.go:61] "metrics-server-569cc877fc-c4h7f" [bf1cae8d-cca1-4573-935f-e60118ca9575] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:58.230698   65980 system_pods.go:61] "storage-provisioner" [1686a084-f28b-4b26-b748-85a2a3613dde] Running
	I0429 20:06:58.230703   65980 system_pods.go:74] duration metric: took 15.348727ms to wait for pod list to return data ...
	I0429 20:06:58.230713   65980 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:58.233411   65980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:58.233436   65980 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:58.233447   65980 node_conditions.go:105] duration metric: took 2.729694ms to run NodePressure ...
	I0429 20:06:58.233466   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:58.532729   65980 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:58.538018   65980 kubeadm.go:733] kubelet initialised
	I0429 20:06:58.538038   65980 kubeadm.go:734] duration metric: took 5.283028ms waiting for restarted kubelet to initialise ...
	I0429 20:06:58.538046   65980 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:58.544267   65980 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:00.553501   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:00.398537   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.899359   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.399125   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.899428   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.399457   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.899355   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.399421   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.899376   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.399331   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.899263   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.555517   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:02.557429   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:05.056216   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:04.049941   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:06.051285   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:03.069330   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:03.554710   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.554732   65980 pod_ready.go:81] duration metric: took 5.010440873s for pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.554742   65980 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.562277   65980 pod_ready.go:92] pod "etcd-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.562298   65980 pod_ready.go:81] duration metric: took 7.550156ms for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.562306   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.567038   65980 pod_ready.go:92] pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.567060   65980 pod_ready.go:81] duration metric: took 4.748002ms for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.567069   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.573632   65980 pod_ready.go:92] pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.573664   65980 pod_ready.go:81] duration metric: took 1.006574407s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.573675   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9kmg8" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.578356   65980 pod_ready.go:92] pod "kube-proxy-9kmg8" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.578377   65980 pod_ready.go:81] duration metric: took 4.694437ms for pod "kube-proxy-9kmg8" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.578388   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.749703   65980 pod_ready.go:92] pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.749733   65980 pod_ready.go:81] duration metric: took 171.336391ms for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.749750   65980 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:06.757041   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:05.398458   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:05.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.399205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.399308   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.898749   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:08.399182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:08.399271   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:08.448015   66615 cri.go:89] found id: ""
	I0429 20:07:08.448041   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.448049   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:08.448055   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:08.448103   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:08.491239   66615 cri.go:89] found id: ""
	I0429 20:07:08.491265   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.491274   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:08.491280   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:08.491330   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:08.541203   66615 cri.go:89] found id: ""
	I0429 20:07:08.541226   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.541234   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:08.541239   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:08.541300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:08.584370   66615 cri.go:89] found id: ""
	I0429 20:07:08.584393   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.584401   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:08.584407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:08.584469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:08.625126   66615 cri.go:89] found id: ""
	I0429 20:07:08.625158   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.625169   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:08.625182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:08.625246   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:08.666987   66615 cri.go:89] found id: ""
	I0429 20:07:08.667018   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.667032   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:08.667039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:08.667105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:08.712363   66615 cri.go:89] found id: ""
	I0429 20:07:08.712394   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.712405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:08.712413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:08.712471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:08.762122   66615 cri.go:89] found id: ""
	I0429 20:07:08.762151   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.762170   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:08.762180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:08.762196   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:08.808218   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:08.808246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:08.867278   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:08.867317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:08.884230   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:08.884266   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:09.018183   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:09.018208   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:09.018224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:07.555443   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:09.557653   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:08.551823   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.051232   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:09.257687   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.758829   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.587112   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:11.603711   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:11.603781   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:11.651087   66615 cri.go:89] found id: ""
	I0429 20:07:11.651115   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.651123   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:11.651128   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:11.651192   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:11.691888   66615 cri.go:89] found id: ""
	I0429 20:07:11.691914   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.691921   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:11.691928   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:11.691976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:11.733411   66615 cri.go:89] found id: ""
	I0429 20:07:11.733441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.733452   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:11.733460   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:11.733517   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:11.774620   66615 cri.go:89] found id: ""
	I0429 20:07:11.774648   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.774659   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:11.774666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:11.774729   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:11.821410   66615 cri.go:89] found id: ""
	I0429 20:07:11.821441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.821449   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:11.821455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:11.821502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:11.864699   66615 cri.go:89] found id: ""
	I0429 20:07:11.864730   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.864741   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:11.864749   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:11.864809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:11.904637   66615 cri.go:89] found id: ""
	I0429 20:07:11.904678   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.904687   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:11.904693   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:11.904742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:11.970914   66615 cri.go:89] found id: ""
	I0429 20:07:11.970945   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.970957   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:11.970968   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:11.970984   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:12.024185   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:12.024226   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:12.040319   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:12.040349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:12.137888   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:12.137915   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:12.137941   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:12.210256   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:12.210290   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
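	(The cycle above — probing for a kube-apiserver process, listing CRI containers for each control-plane component, then falling back to kubelet/dmesg/CRI-O logs — can be reproduced by hand with a short loop. This is a minimal sketch built only from the commands quoted in this log; it assumes shell access to the node and that crictl and journalctl are on PATH, and is not part of the recorded test output.)

	# Check each expected control-plane container the same way the log does.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container found matching \"$name\""
	done
	# Fall back to the same log sources minikube gathers when nothing is running.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400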
	I0429 20:07:14.758756   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:14.775321   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:14.775386   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:14.812637   66615 cri.go:89] found id: ""
	I0429 20:07:14.812662   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.812672   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:14.812679   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:14.812735   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:14.851503   66615 cri.go:89] found id: ""
	I0429 20:07:14.851536   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.851547   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:14.851554   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:14.851613   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:14.885708   66615 cri.go:89] found id: ""
	I0429 20:07:14.885739   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.885749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:14.885756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:14.885817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:14.926133   66615 cri.go:89] found id: ""
	I0429 20:07:14.926162   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.926173   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:14.926181   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:14.926240   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:12.056093   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.056500   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:13.549924   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:15.550544   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.257394   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:16.756833   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.967553   66615 cri.go:89] found id: ""
	I0429 20:07:14.967582   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.967593   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:14.967601   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:14.967659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:15.006174   66615 cri.go:89] found id: ""
	I0429 20:07:15.006199   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.006207   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:15.006218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:15.006293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:15.046916   66615 cri.go:89] found id: ""
	I0429 20:07:15.046940   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.046947   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:15.046953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:15.047009   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:15.089229   66615 cri.go:89] found id: ""
	I0429 20:07:15.089256   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.089266   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:15.089278   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:15.089298   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:15.143518   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:15.143561   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:15.162742   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:15.162769   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:15.242850   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:15.242872   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:15.242884   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:15.315783   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:15.315825   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:17.863336   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:17.877802   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:17.877869   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:17.935714   66615 cri.go:89] found id: ""
	I0429 20:07:17.935738   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.935746   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:17.935754   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:17.935810   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:17.988496   66615 cri.go:89] found id: ""
	I0429 20:07:17.988529   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.988540   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:17.988547   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:17.988610   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:18.030695   66615 cri.go:89] found id: ""
	I0429 20:07:18.030726   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.030737   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:18.030745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:18.030822   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:18.077452   66615 cri.go:89] found id: ""
	I0429 20:07:18.077481   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.077491   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:18.077498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:18.077561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:18.120102   66615 cri.go:89] found id: ""
	I0429 20:07:18.120127   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.120136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:18.120141   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:18.120200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:18.163440   66615 cri.go:89] found id: ""
	I0429 20:07:18.163469   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.163480   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:18.163487   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:18.163549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:18.202650   66615 cri.go:89] found id: ""
	I0429 20:07:18.202680   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.202693   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:18.202699   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:18.202760   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:18.244378   66615 cri.go:89] found id: ""
	I0429 20:07:18.244408   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.244418   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:18.244429   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:18.244446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:18.289246   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:18.289279   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:18.343382   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:18.343425   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:18.359070   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:18.359103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:18.440316   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:18.440337   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:18.440351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:16.555711   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.555851   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.051297   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:20.551594   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.756941   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:20.756974   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:22.757155   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:21.019552   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:21.036407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:21.036523   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:21.083148   66615 cri.go:89] found id: ""
	I0429 20:07:21.083171   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.083179   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:21.083184   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:21.083231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:21.129382   66615 cri.go:89] found id: ""
	I0429 20:07:21.129415   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.129426   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:21.129434   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:21.129502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:21.172978   66615 cri.go:89] found id: ""
	I0429 20:07:21.173007   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.173015   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:21.173020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:21.173068   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:21.218124   66615 cri.go:89] found id: ""
	I0429 20:07:21.218159   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.218171   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:21.218178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:21.218243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:21.260603   66615 cri.go:89] found id: ""
	I0429 20:07:21.260640   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.260651   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:21.260658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:21.260723   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:21.302351   66615 cri.go:89] found id: ""
	I0429 20:07:21.302386   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.302398   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:21.302407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:21.302498   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:21.347003   66615 cri.go:89] found id: ""
	I0429 20:07:21.347028   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.347037   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:21.347043   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:21.347098   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:21.388202   66615 cri.go:89] found id: ""
	I0429 20:07:21.388236   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.388245   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:21.388257   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:21.388272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:21.442706   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:21.442744   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:21.457453   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:21.457489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:21.539669   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:21.539695   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:21.539707   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:21.625210   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:21.625247   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.173256   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:24.189920   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:24.189990   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:24.236730   66615 cri.go:89] found id: ""
	I0429 20:07:24.236761   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.236772   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:24.236779   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:24.236843   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:24.279031   66615 cri.go:89] found id: ""
	I0429 20:07:24.279055   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.279062   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:24.279067   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:24.279112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:24.321622   66615 cri.go:89] found id: ""
	I0429 20:07:24.321647   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.321657   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:24.321665   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:24.321726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:24.360884   66615 cri.go:89] found id: ""
	I0429 20:07:24.360911   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.360919   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:24.360924   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:24.360983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:24.414439   66615 cri.go:89] found id: ""
	I0429 20:07:24.414463   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.414472   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:24.414477   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:24.414559   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:24.456994   66615 cri.go:89] found id: ""
	I0429 20:07:24.457023   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.457033   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:24.457041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:24.457107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:24.497991   66615 cri.go:89] found id: ""
	I0429 20:07:24.498026   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.498036   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:24.498044   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:24.498137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:24.539375   66615 cri.go:89] found id: ""
	I0429 20:07:24.539415   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.539426   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:24.539438   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:24.539453   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:24.661778   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:24.661804   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:24.661820   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:24.748180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:24.748215   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.795963   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:24.795999   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:24.851485   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:24.851524   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:20.556543   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:22.556775   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:24.559759   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:23.052715   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:25.550857   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.551209   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:25.256195   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.258199   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.367869   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:27.385633   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:27.385716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:27.423181   66615 cri.go:89] found id: ""
	I0429 20:07:27.423210   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.423222   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:27.423233   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:27.423293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:27.467385   66615 cri.go:89] found id: ""
	I0429 20:07:27.467419   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.467432   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:27.467439   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:27.467503   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:27.506171   66615 cri.go:89] found id: ""
	I0429 20:07:27.506204   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.506216   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:27.506223   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:27.506272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:27.545043   66615 cri.go:89] found id: ""
	I0429 20:07:27.545066   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.545074   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:27.545080   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:27.545136   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:27.592279   66615 cri.go:89] found id: ""
	I0429 20:07:27.592306   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.592314   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:27.592320   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:27.592379   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:27.628569   66615 cri.go:89] found id: ""
	I0429 20:07:27.628595   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.628604   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:27.628612   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:27.628659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:27.667937   66615 cri.go:89] found id: ""
	I0429 20:07:27.667967   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.667978   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:27.667985   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:27.668047   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:27.708813   66615 cri.go:89] found id: ""
	I0429 20:07:27.708844   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.708853   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:27.708861   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:27.708876   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:27.789589   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:27.789625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:27.837147   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:27.837180   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:27.891928   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:27.891956   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:27.906162   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:27.906188   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:27.983738   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
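	(The repeated "connection to the server localhost:8443 was refused" errors indicate that no API server is listening on the node yet. A quick manual check is sketched below; it assumes ss and curl are available in the guest image — an assumption, they are not shown in this log — and reuses the same bundled kubectl path and kubeconfig the log invokes for "describe nodes".)

	# Is anything listening on the apiserver port?
	sudo ss -ltnp | grep 8443 || echo "nothing listening on :8443"
	# Optional liveness probe against the apiserver, skipping TLS verification.
	curl -k https://localhost:8443/healthz || true
	# Same binary and kubeconfig the log uses for 'describe nodes'.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig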
	I0429 20:07:27.057372   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:29.555872   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:30.049373   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:32.052279   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:29.758675   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:32.257486   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:30.484404   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:30.503968   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:30.504041   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:30.553070   66615 cri.go:89] found id: ""
	I0429 20:07:30.553099   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.553111   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:30.553118   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:30.553180   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:30.609226   66615 cri.go:89] found id: ""
	I0429 20:07:30.609253   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.609262   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:30.609267   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:30.609324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:30.658359   66615 cri.go:89] found id: ""
	I0429 20:07:30.658384   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.658395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:30.658401   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:30.658459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:30.710024   66615 cri.go:89] found id: ""
	I0429 20:07:30.710048   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.710058   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:30.710114   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:30.710173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:30.752361   66615 cri.go:89] found id: ""
	I0429 20:07:30.752388   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.752398   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:30.752405   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:30.752469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:30.793311   66615 cri.go:89] found id: ""
	I0429 20:07:30.793333   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.793341   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:30.793347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:30.793394   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:30.832371   66615 cri.go:89] found id: ""
	I0429 20:07:30.832400   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.832411   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:30.832417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:30.832469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:30.871183   66615 cri.go:89] found id: ""
	I0429 20:07:30.871215   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.871226   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:30.871237   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:30.871253   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:30.929909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:30.929947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:30.944454   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:30.944482   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:31.022060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:31.022100   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:31.022116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:31.104142   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:31.104185   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:33.651167   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:33.667888   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:33.667948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:33.708455   66615 cri.go:89] found id: ""
	I0429 20:07:33.708484   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.708495   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:33.708502   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:33.708561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:33.747578   66615 cri.go:89] found id: ""
	I0429 20:07:33.747602   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.747611   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:33.747616   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:33.747661   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:33.796005   66615 cri.go:89] found id: ""
	I0429 20:07:33.796036   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.796056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:33.796064   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:33.796128   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:33.836238   66615 cri.go:89] found id: ""
	I0429 20:07:33.836263   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.836271   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:33.836276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:33.836324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:33.877010   66615 cri.go:89] found id: ""
	I0429 20:07:33.877043   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.877056   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:33.877065   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:33.877137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:33.919690   66615 cri.go:89] found id: ""
	I0429 20:07:33.919714   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.919722   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:33.919727   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:33.919797   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:33.959857   66615 cri.go:89] found id: ""
	I0429 20:07:33.959889   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.959900   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:33.959907   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:33.959989   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:33.996349   66615 cri.go:89] found id: ""
	I0429 20:07:33.996376   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.996386   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:33.996396   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:33.996433   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:34.010773   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:34.010808   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:34.091581   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:34.091599   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:34.091611   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:34.173266   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:34.173299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:34.221447   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:34.221479   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:32.055352   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.056364   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.550100   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.550663   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.756264   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.756583   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.776486   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:36.791630   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:36.791764   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:36.837475   66615 cri.go:89] found id: ""
	I0429 20:07:36.837503   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.837513   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:36.837521   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:36.837607   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:36.879902   66615 cri.go:89] found id: ""
	I0429 20:07:36.879936   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.879947   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:36.879954   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:36.880021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:36.918566   66615 cri.go:89] found id: ""
	I0429 20:07:36.918594   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.918608   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:36.918613   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:36.918659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:36.958876   66615 cri.go:89] found id: ""
	I0429 20:07:36.958937   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.958948   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:36.958959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:36.959008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:36.998790   66615 cri.go:89] found id: ""
	I0429 20:07:36.998820   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.998845   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:36.998864   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:36.998932   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:37.036933   66615 cri.go:89] found id: ""
	I0429 20:07:37.036962   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.036972   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:37.036979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:37.037024   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:37.076560   66615 cri.go:89] found id: ""
	I0429 20:07:37.076597   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.076609   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:37.076616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:37.076688   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:37.118324   66615 cri.go:89] found id: ""
	I0429 20:07:37.118351   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.118360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:37.118368   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:37.118380   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:37.194671   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:37.194714   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:37.236269   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:37.236300   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:37.297006   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:37.297061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:37.312696   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:37.312723   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:37.387132   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:39.888111   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:39.903157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:39.903236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:39.945913   66615 cri.go:89] found id: ""
	I0429 20:07:39.945945   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.945956   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:39.945980   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:39.946076   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:36.056553   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:38.057230   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:39.050274   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:41.053502   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:38.756717   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:40.762297   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:39.986494   66615 cri.go:89] found id: ""
	I0429 20:07:39.986521   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.986530   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:39.986538   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:39.986598   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:40.031481   66615 cri.go:89] found id: ""
	I0429 20:07:40.031520   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.031531   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:40.031539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:40.031604   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:40.076792   66615 cri.go:89] found id: ""
	I0429 20:07:40.076816   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.076824   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:40.076830   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:40.076877   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:40.121020   66615 cri.go:89] found id: ""
	I0429 20:07:40.121050   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.121061   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:40.121068   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:40.121134   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:40.173189   66615 cri.go:89] found id: ""
	I0429 20:07:40.173221   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.173233   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:40.173241   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:40.173303   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:40.220190   66615 cri.go:89] found id: ""
	I0429 20:07:40.220212   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.220223   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:40.220229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:40.220293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:40.262552   66615 cri.go:89] found id: ""
	I0429 20:07:40.262579   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.262588   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:40.262600   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:40.262616   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:40.322249   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:40.322289   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:40.338703   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:40.338734   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:40.431311   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:40.431333   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:40.431345   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:40.518410   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:40.518446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:43.062556   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:43.077757   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:43.077844   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:43.129247   66615 cri.go:89] found id: ""
	I0429 20:07:43.129277   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.129289   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:43.129296   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:43.129364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:43.173474   66615 cri.go:89] found id: ""
	I0429 20:07:43.173501   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.173509   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:43.173514   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:43.173566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:43.218788   66615 cri.go:89] found id: ""
	I0429 20:07:43.218812   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.218820   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:43.218825   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:43.218873   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:43.259269   66615 cri.go:89] found id: ""
	I0429 20:07:43.259289   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.259297   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:43.259302   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:43.259362   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:43.301152   66615 cri.go:89] found id: ""
	I0429 20:07:43.301180   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.301189   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:43.301195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:43.301244   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:43.338183   66615 cri.go:89] found id: ""
	I0429 20:07:43.338211   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.338222   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:43.338229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:43.338276   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:43.376919   66615 cri.go:89] found id: ""
	I0429 20:07:43.376946   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.376958   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:43.376966   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:43.377032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:43.417421   66615 cri.go:89] found id: ""
	I0429 20:07:43.417450   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.417457   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:43.417465   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:43.417478   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:43.470009   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:43.470040   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:43.486059   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:43.486109   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:43.561688   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:43.561709   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:43.561725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:43.649713   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:43.649750   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:40.555780   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.056758   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.552176   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:46.049393   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.256870   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:45.258520   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:47.757738   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:46.194996   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:46.210261   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:46.210342   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:46.249208   66615 cri.go:89] found id: ""
	I0429 20:07:46.249240   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.249253   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:46.249260   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:46.249336   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:46.287285   66615 cri.go:89] found id: ""
	I0429 20:07:46.287315   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.287328   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:46.287335   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:46.287397   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:46.327944   66615 cri.go:89] found id: ""
	I0429 20:07:46.327976   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.327988   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:46.327996   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:46.328061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:46.373875   66615 cri.go:89] found id: ""
	I0429 20:07:46.373899   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.373908   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:46.373914   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:46.373967   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:46.413748   66615 cri.go:89] found id: ""
	I0429 20:07:46.413774   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.413783   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:46.413789   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:46.413853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:46.459380   66615 cri.go:89] found id: ""
	I0429 20:07:46.459412   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.459424   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:46.459432   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:46.459496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:46.499833   66615 cri.go:89] found id: ""
	I0429 20:07:46.499861   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.499870   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:46.499876   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:46.499939   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:46.541025   66615 cri.go:89] found id: ""
	I0429 20:07:46.541055   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.541068   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:46.541080   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:46.541096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:46.601187   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:46.601224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:46.617399   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:46.617426   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:46.697076   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:46.697113   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:46.697129   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:46.783265   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:46.783303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.335795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:49.350030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:49.350116   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:49.390278   66615 cri.go:89] found id: ""
	I0429 20:07:49.390315   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.390326   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:49.390333   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:49.390388   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:49.431145   66615 cri.go:89] found id: ""
	I0429 20:07:49.431175   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.431186   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:49.431193   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:49.431252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:49.473965   66615 cri.go:89] found id: ""
	I0429 20:07:49.473997   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.474014   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:49.474022   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:49.474105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:49.515372   66615 cri.go:89] found id: ""
	I0429 20:07:49.515407   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.515419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:49.515427   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:49.515487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:49.552541   66615 cri.go:89] found id: ""
	I0429 20:07:49.552567   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.552576   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:49.552582   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:49.552650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:49.599628   66615 cri.go:89] found id: ""
	I0429 20:07:49.599660   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.599672   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:49.599680   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:49.599745   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:49.642705   66615 cri.go:89] found id: ""
	I0429 20:07:49.642741   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.642752   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:49.642759   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:49.642827   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:49.679864   66615 cri.go:89] found id: ""
	I0429 20:07:49.679888   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.679896   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:49.679905   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:49.679919   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:49.765967   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:49.765986   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:49.766010   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:49.852739   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:49.852779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.905586   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:49.905613   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:45.559781   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:48.059952   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:48.049788   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:50.548836   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.551059   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:50.256898   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.757213   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:49.959443   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:49.959474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:52.476677   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:52.491378   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:52.491458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:52.535801   66615 cri.go:89] found id: ""
	I0429 20:07:52.535827   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.535835   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:52.535841   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:52.535901   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:52.582895   66615 cri.go:89] found id: ""
	I0429 20:07:52.582932   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.582944   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:52.582952   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:52.583022   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:52.627070   66615 cri.go:89] found id: ""
	I0429 20:07:52.627096   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.627113   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:52.627120   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:52.627181   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:52.673312   66615 cri.go:89] found id: ""
	I0429 20:07:52.673339   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.673348   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:52.673353   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:52.673399   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:52.713099   66615 cri.go:89] found id: ""
	I0429 20:07:52.713124   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.713131   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:52.713139   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:52.713205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:52.761982   66615 cri.go:89] found id: ""
	I0429 20:07:52.762007   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.762017   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:52.762024   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:52.762108   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:52.801019   66615 cri.go:89] found id: ""
	I0429 20:07:52.801048   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.801059   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:52.801067   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:52.801141   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:52.842544   66615 cri.go:89] found id: ""
	I0429 20:07:52.842578   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.842602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:52.842613   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:52.842630   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:52.896409   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:52.896442   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:52.912625   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:52.912650   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:52.992231   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:52.992260   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:52.992276   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:53.077473   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:53.077507   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:50.555818   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.556860   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:54.557161   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:54.554094   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:57.049699   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:55.257406   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:57.257840   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:55.625557   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:55.640211   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:55.640284   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:55.683215   66615 cri.go:89] found id: ""
	I0429 20:07:55.683250   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.683259   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:55.683275   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:55.683341   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:55.730820   66615 cri.go:89] found id: ""
	I0429 20:07:55.730851   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.730862   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:55.730869   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:55.730928   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:55.771784   66615 cri.go:89] found id: ""
	I0429 20:07:55.771808   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.771816   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:55.771821   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:55.771866   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:55.814988   66615 cri.go:89] found id: ""
	I0429 20:07:55.815021   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.815034   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:55.815042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:55.815114   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:55.859293   66615 cri.go:89] found id: ""
	I0429 20:07:55.859327   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.859340   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:55.859349   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:55.859416   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:55.901802   66615 cri.go:89] found id: ""
	I0429 20:07:55.901833   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.901844   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:55.901852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:55.901921   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:55.943863   66615 cri.go:89] found id: ""
	I0429 20:07:55.943895   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.943905   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:55.943913   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:55.943977   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:55.986256   66615 cri.go:89] found id: ""
	I0429 20:07:55.986284   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.986296   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:55.986314   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:55.986332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:56.036710   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:56.036742   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:56.099909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:56.099945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:56.117630   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:56.117660   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:56.197396   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:56.197421   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:56.197436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:58.779065   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:58.794086   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:58.794168   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:58.844035   66615 cri.go:89] found id: ""
	I0429 20:07:58.844062   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.844070   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:58.844076   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:58.844133   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:58.887859   66615 cri.go:89] found id: ""
	I0429 20:07:58.887889   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.887900   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:58.887906   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:58.887991   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:58.929039   66615 cri.go:89] found id: ""
	I0429 20:07:58.929072   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.929083   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:58.929092   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:58.929152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:58.965930   66615 cri.go:89] found id: ""
	I0429 20:07:58.965975   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.965983   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:58.965989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:58.966061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:59.005583   66615 cri.go:89] found id: ""
	I0429 20:07:59.005616   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.005628   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:59.005638   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:59.005697   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:59.047964   66615 cri.go:89] found id: ""
	I0429 20:07:59.047994   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.048007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:59.048014   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:59.048077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:59.091851   66615 cri.go:89] found id: ""
	I0429 20:07:59.091891   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.091904   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:59.091909   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:59.091978   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:59.134843   66615 cri.go:89] found id: ""
	I0429 20:07:59.134874   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.134881   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:59.134890   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:59.134907   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:59.219048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:59.219084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:59.267404   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:59.267436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:59.322264   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:59.322303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:59.339196   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:59.339235   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:59.441904   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:56.558660   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.057214   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.054473   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.550825   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.756683   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.759031   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.942998   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:01.957442   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:01.957502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:02.002240   66615 cri.go:89] found id: ""
	I0429 20:08:02.002271   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.002283   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:02.002291   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:02.002353   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:02.051506   66615 cri.go:89] found id: ""
	I0429 20:08:02.051535   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.051546   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:02.051552   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:02.051611   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:02.093194   66615 cri.go:89] found id: ""
	I0429 20:08:02.093234   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.093247   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:02.093254   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:02.093317   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:02.134988   66615 cri.go:89] found id: ""
	I0429 20:08:02.135016   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.135027   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:02.135034   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:02.135099   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:02.182954   66615 cri.go:89] found id: ""
	I0429 20:08:02.182982   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.182993   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:02.183000   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:02.183063   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:02.227778   66615 cri.go:89] found id: ""
	I0429 20:08:02.227807   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.227817   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:02.227826   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:02.227888   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:02.265593   66615 cri.go:89] found id: ""
	I0429 20:08:02.265624   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.265634   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:02.265641   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:02.265701   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:02.306520   66615 cri.go:89] found id: ""
	I0429 20:08:02.306550   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.306558   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:02.306566   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:02.306578   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:02.323806   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:02.323844   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:02.407110   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:02.407140   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:02.407153   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:02.493755   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:02.493791   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:02.538610   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:02.538640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:01.556084   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:03.556487   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:03.551788   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:05.553047   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:04.257831   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:06.756438   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:05.096630   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:05.111112   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:05.111173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:05.151237   66615 cri.go:89] found id: ""
	I0429 20:08:05.151268   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.151279   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:05.151286   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:05.151370   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:05.205344   66615 cri.go:89] found id: ""
	I0429 20:08:05.205379   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.205389   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:05.205396   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:05.205478   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:05.244394   66615 cri.go:89] found id: ""
	I0429 20:08:05.244426   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.244438   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:05.244445   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:05.244504   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:05.285320   66615 cri.go:89] found id: ""
	I0429 20:08:05.285343   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.285350   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:05.285356   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:05.285404   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:05.327618   66615 cri.go:89] found id: ""
	I0429 20:08:05.327645   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.327657   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:05.327664   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:05.327742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:05.369152   66615 cri.go:89] found id: ""
	I0429 20:08:05.369178   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.369194   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:05.369208   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:05.369277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:05.407206   66615 cri.go:89] found id: ""
	I0429 20:08:05.407234   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.407243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:05.407248   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:05.407299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:05.447404   66615 cri.go:89] found id: ""
	I0429 20:08:05.447438   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.447449   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:05.447459   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:05.447475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:05.529660   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:05.529700   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:05.582510   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:05.582565   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:05.639300   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:05.639351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:05.656825   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:05.656860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:05.730863   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.231635   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:08.247722   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:08.247811   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:08.298354   66615 cri.go:89] found id: ""
	I0429 20:08:08.298382   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.298395   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:08.298401   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:08.298459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:08.339497   66615 cri.go:89] found id: ""
	I0429 20:08:08.339536   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.339549   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:08.339556   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:08.339609   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:08.379665   66615 cri.go:89] found id: ""
	I0429 20:08:08.379695   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.379705   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:08.379712   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:08.379786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:08.419698   66615 cri.go:89] found id: ""
	I0429 20:08:08.419722   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.419732   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:08.419739   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:08.419798   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:08.463901   66615 cri.go:89] found id: ""
	I0429 20:08:08.463935   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.463946   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:08.463953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:08.464028   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:08.504568   66615 cri.go:89] found id: ""
	I0429 20:08:08.504603   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.504617   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:08.504626   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:08.504695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:08.545634   66615 cri.go:89] found id: ""
	I0429 20:08:08.545661   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.545671   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:08.545678   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:08.545741   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:08.586936   66615 cri.go:89] found id: ""
	I0429 20:08:08.586965   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.586976   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:08.586987   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:08.587003   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:08.641755   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:08.641794   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:08.659798   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:08.659845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:08.744265   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.744288   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:08.744303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:08.823813   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:08.823860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:05.557172   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:07.558538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:10.057841   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:08.049902   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:10.050576   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:12.051331   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:08.757300   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:11.257697   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:11.375600   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:11.396286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:11.396351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:11.442737   66615 cri.go:89] found id: ""
	I0429 20:08:11.442781   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.442789   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:11.442797   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:11.442865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:11.484131   66615 cri.go:89] found id: ""
	I0429 20:08:11.484158   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.484167   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:11.484172   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:11.484231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:11.526647   66615 cri.go:89] found id: ""
	I0429 20:08:11.526684   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.526695   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:11.526705   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:11.526777   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:11.572001   66615 cri.go:89] found id: ""
	I0429 20:08:11.572028   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.572036   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:11.572042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:11.572100   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:11.618980   66615 cri.go:89] found id: ""
	I0429 20:08:11.619003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.619011   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:11.619016   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:11.619077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:11.667079   66615 cri.go:89] found id: ""
	I0429 20:08:11.667107   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.667115   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:11.667123   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:11.667198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:11.707967   66615 cri.go:89] found id: ""
	I0429 20:08:11.708003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.708013   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:11.708020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:11.708073   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:11.753024   66615 cri.go:89] found id: ""
	I0429 20:08:11.753053   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.753062   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:11.753070   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:11.753081   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:11.820171   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:11.820210   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:11.852234   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:11.852263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:11.971060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:11.971085   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:11.971097   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:12.049797   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:12.049845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:14.601181   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:14.621413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:14.621496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:14.677453   66615 cri.go:89] found id: ""
	I0429 20:08:14.677486   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.677498   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:14.677504   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:14.677562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:14.720517   66615 cri.go:89] found id: ""
	I0429 20:08:14.720548   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.720560   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:14.720571   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:14.720636   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:14.770186   66615 cri.go:89] found id: ""
	I0429 20:08:14.770211   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.770219   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:14.770225   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:14.770301   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:14.815286   66615 cri.go:89] found id: ""
	I0429 20:08:14.815310   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.815320   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:14.815327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:14.815389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:14.862625   66615 cri.go:89] found id: ""
	I0429 20:08:14.862651   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.862662   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:14.862669   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:14.862726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:14.910517   66615 cri.go:89] found id: ""
	I0429 20:08:14.910554   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.910565   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:14.910572   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:14.910634   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:14.951085   66615 cri.go:89] found id: ""
	I0429 20:08:14.951110   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.951119   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:14.951124   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:14.951173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:12.558191   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:15.056987   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:14.051423   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:16.051632   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:13.757001   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:16.257425   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:14.991414   66615 cri.go:89] found id: ""
	I0429 20:08:14.991443   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.991455   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:14.991464   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:14.991476   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:15.047551   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:15.047583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:15.063667   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:15.063692   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:15.141744   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:15.141820   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:15.141841   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:15.225676   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:15.225722   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:17.774459   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:17.793137   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:17.793210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:17.856725   66615 cri.go:89] found id: ""
	I0429 20:08:17.856756   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.856767   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:17.856774   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:17.856835   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:17.916510   66615 cri.go:89] found id: ""
	I0429 20:08:17.916542   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.916554   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:17.916561   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:17.916646   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:17.970835   66615 cri.go:89] found id: ""
	I0429 20:08:17.970867   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.970877   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:17.970884   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:17.970948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:18.013324   66615 cri.go:89] found id: ""
	I0429 20:08:18.013353   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.013366   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:18.013384   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:18.013458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:18.062930   66615 cri.go:89] found id: ""
	I0429 20:08:18.062957   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.062968   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:18.062974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:18.063040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:18.111792   66615 cri.go:89] found id: ""
	I0429 20:08:18.111820   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.111829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:18.111834   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:18.111911   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:18.160096   66615 cri.go:89] found id: ""
	I0429 20:08:18.160121   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.160129   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:18.160135   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:18.160198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:18.204012   66615 cri.go:89] found id: ""
	I0429 20:08:18.204044   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.204052   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:18.204062   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:18.204074   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:18.284288   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:18.284337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:18.340746   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:18.340779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:18.397612   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:18.397652   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:18.413425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:18.413455   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:18.493598   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:17.058215   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:19.556308   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:18.551175   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:20.551292   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:22.551637   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:18.757370   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:21.259192   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:20.994339   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:21.010199   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:21.010289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:21.052190   66615 cri.go:89] found id: ""
	I0429 20:08:21.052219   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.052230   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:21.052237   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:21.052300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:21.090838   66615 cri.go:89] found id: ""
	I0429 20:08:21.090870   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.090882   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:21.090889   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:21.090953   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:21.137997   66615 cri.go:89] found id: ""
	I0429 20:08:21.138044   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.138056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:21.138082   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:21.138171   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:21.176278   66615 cri.go:89] found id: ""
	I0429 20:08:21.176311   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.176323   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:21.176331   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:21.176390   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:21.213925   66615 cri.go:89] found id: ""
	I0429 20:08:21.213955   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.213966   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:21.213973   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:21.214039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:21.253815   66615 cri.go:89] found id: ""
	I0429 20:08:21.253842   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.253850   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:21.253857   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:21.253905   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:21.296521   66615 cri.go:89] found id: ""
	I0429 20:08:21.296553   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.296565   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:21.296573   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:21.296633   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:21.337114   66615 cri.go:89] found id: ""
	I0429 20:08:21.337143   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.337150   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:21.337158   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:21.337177   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:21.384860   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:21.384901   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:21.443837   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:21.443899   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:21.460084   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:21.460116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:21.541230   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:21.541262   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:21.541278   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.132057   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:24.148381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:24.148458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:24.192469   66615 cri.go:89] found id: ""
	I0429 20:08:24.192499   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.192510   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:24.192516   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:24.192568   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:24.232150   66615 cri.go:89] found id: ""
	I0429 20:08:24.232177   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.232188   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:24.232195   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:24.232260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:24.272679   66615 cri.go:89] found id: ""
	I0429 20:08:24.272705   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.272714   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:24.272719   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:24.272772   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:24.317114   66615 cri.go:89] found id: ""
	I0429 20:08:24.317137   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.317145   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:24.317151   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:24.317200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:24.362251   66615 cri.go:89] found id: ""
	I0429 20:08:24.362279   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.362287   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:24.362294   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:24.362346   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:24.405696   66615 cri.go:89] found id: ""
	I0429 20:08:24.405721   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.405729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:24.405734   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:24.405828   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:24.446837   66615 cri.go:89] found id: ""
	I0429 20:08:24.446864   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.446871   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:24.446878   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:24.446929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:24.493416   66615 cri.go:89] found id: ""
	I0429 20:08:24.493445   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.493454   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:24.493462   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:24.493475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:24.555657   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:24.555693   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:24.572297   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:24.572328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:24.658463   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:24.658487   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:24.658499   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.752064   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:24.752103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:21.557948   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:24.056339   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:25.050530   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:27.554744   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:23.758156   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:26.261403   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
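
Interleaved with the log-gathering loop, three other runs (PIDs 66875, 66218, 65980) keep polling metrics-server pods that never report Ready. A hedged way to inspect the same condition directly, using a pod name taken from the log (assumes kubeconfig access to that cluster; checking by label selector instead of the literal pod name would be the more general form):

    $ kubectl -n kube-system get pod metrics-server-569cc877fc-g6gw2 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
      # prints "False" for as long as the pod stays unready, matching the pod_ready.go lines above
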
	I0429 20:08:27.303812   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:27.319304   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:27.319373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:27.360473   66615 cri.go:89] found id: ""
	I0429 20:08:27.360509   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.360521   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:27.360529   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:27.360595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:27.404619   66615 cri.go:89] found id: ""
	I0429 20:08:27.404651   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.404668   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:27.404675   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:27.404742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:27.447464   66615 cri.go:89] found id: ""
	I0429 20:08:27.447490   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.447498   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:27.447503   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:27.447556   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:27.489197   66615 cri.go:89] found id: ""
	I0429 20:08:27.489235   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.489246   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:27.489253   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:27.489323   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:27.534354   66615 cri.go:89] found id: ""
	I0429 20:08:27.534387   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.534397   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:27.534404   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:27.534470   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:27.580721   66615 cri.go:89] found id: ""
	I0429 20:08:27.580751   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.580762   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:27.580769   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:27.580841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:27.620000   66615 cri.go:89] found id: ""
	I0429 20:08:27.620033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.620041   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:27.620046   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:27.620096   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:27.659000   66615 cri.go:89] found id: ""
	I0429 20:08:27.659033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.659041   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:27.659050   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:27.659062   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:27.739202   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:27.739241   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:27.784761   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:27.784807   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:27.842707   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:27.842748   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:27.859471   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:27.859498   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:27.942686   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:26.058098   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:28.059648   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.056692   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:32.550893   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:28.757412   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.759070   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.443410   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:30.460332   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:30.460417   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:30.497715   66615 cri.go:89] found id: ""
	I0429 20:08:30.497752   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.497764   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:30.497772   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:30.497841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:30.539376   66615 cri.go:89] found id: ""
	I0429 20:08:30.539409   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.539419   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:30.539426   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:30.539492   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:30.587567   66615 cri.go:89] found id: ""
	I0429 20:08:30.587596   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.587606   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:30.587616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:30.587679   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:30.626198   66615 cri.go:89] found id: ""
	I0429 20:08:30.626228   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.626238   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:30.626246   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:30.626313   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:30.665798   66615 cri.go:89] found id: ""
	I0429 20:08:30.665829   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.665837   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:30.665843   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:30.665909   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:30.708627   66615 cri.go:89] found id: ""
	I0429 20:08:30.708659   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.708671   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:30.708679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:30.708762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:30.754190   66615 cri.go:89] found id: ""
	I0429 20:08:30.754220   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.754230   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:30.754236   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:30.754295   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:30.797383   66615 cri.go:89] found id: ""
	I0429 20:08:30.797410   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.797421   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:30.797432   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:30.797447   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:30.843485   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:30.843512   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:30.900081   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:30.900118   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:30.916095   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:30.916125   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:30.995509   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:30.995529   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:30.995541   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:33.584596   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:33.600969   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:33.601058   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:33.643935   66615 cri.go:89] found id: ""
	I0429 20:08:33.643967   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.643979   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:33.643986   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:33.644049   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:33.681047   66615 cri.go:89] found id: ""
	I0429 20:08:33.681077   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.681085   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:33.681091   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:33.681160   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:33.726450   66615 cri.go:89] found id: ""
	I0429 20:08:33.726479   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.726490   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:33.726501   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:33.726561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:33.765237   66615 cri.go:89] found id: ""
	I0429 20:08:33.765264   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.765275   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:33.765281   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:33.765339   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:33.808333   66615 cri.go:89] found id: ""
	I0429 20:08:33.808366   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.808376   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:33.808383   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:33.808446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:33.854991   66615 cri.go:89] found id: ""
	I0429 20:08:33.855023   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.855034   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:33.855041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:33.855126   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:33.895405   66615 cri.go:89] found id: ""
	I0429 20:08:33.895434   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.895446   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:33.895455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:33.895521   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:33.937265   66615 cri.go:89] found id: ""
	I0429 20:08:33.937289   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.937297   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:33.937306   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:33.937324   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:33.991565   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:33.991594   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:34.006316   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:34.006343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:34.088734   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:34.088762   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:34.088776   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:34.180451   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:34.180489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:30.557020   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:33.058354   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:35.049638   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:37.051464   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:33.256955   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:35.257122   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:37.257629   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:36.727080   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:36.743038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:36.743124   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:36.785441   66615 cri.go:89] found id: ""
	I0429 20:08:36.785465   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.785475   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:36.785482   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:36.785542   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:36.828787   66615 cri.go:89] found id: ""
	I0429 20:08:36.828819   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.828829   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:36.828836   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:36.828896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:36.867712   66615 cri.go:89] found id: ""
	I0429 20:08:36.867738   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.867749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:36.867756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:36.867825   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:36.911435   66615 cri.go:89] found id: ""
	I0429 20:08:36.911462   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.911472   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:36.911478   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:36.911560   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:36.953803   66615 cri.go:89] found id: ""
	I0429 20:08:36.953828   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.953836   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:36.953842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:36.953903   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:36.990305   66615 cri.go:89] found id: ""
	I0429 20:08:36.990329   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.990339   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:36.990347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:36.990434   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:37.029177   66615 cri.go:89] found id: ""
	I0429 20:08:37.029206   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.029225   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:37.029232   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:37.029294   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:37.067583   66615 cri.go:89] found id: ""
	I0429 20:08:37.067605   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.067612   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:37.067619   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:37.067631   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:37.144739   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:37.144776   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:37.144788   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:37.227724   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:37.227762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:37.270383   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:37.270417   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:37.326858   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:37.326890   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:39.843323   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:39.859899   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:39.859961   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:39.903125   66615 cri.go:89] found id: ""
	I0429 20:08:39.903155   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.903164   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:39.903169   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:39.903243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:39.944271   66615 cri.go:89] found id: ""
	I0429 20:08:39.944300   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.944309   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:39.944314   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:39.944363   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:35.557115   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:38.056175   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.550339   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:42.048622   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.756355   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:42.255528   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.989934   66615 cri.go:89] found id: ""
	I0429 20:08:39.989964   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.989972   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:39.989978   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:39.990032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:40.025936   66615 cri.go:89] found id: ""
	I0429 20:08:40.025965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.025976   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:40.025983   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:40.026044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:40.065943   66615 cri.go:89] found id: ""
	I0429 20:08:40.065965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.065976   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:40.065984   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:40.066038   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:40.109986   66615 cri.go:89] found id: ""
	I0429 20:08:40.110018   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.110030   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:40.110038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:40.110115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:40.155610   66615 cri.go:89] found id: ""
	I0429 20:08:40.155716   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.155734   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:40.155745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:40.155803   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:40.196213   66615 cri.go:89] found id: ""
	I0429 20:08:40.196239   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.196246   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:40.196256   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:40.196272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:40.280330   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:40.280372   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:40.326774   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:40.326810   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:40.379438   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:40.379475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:40.395332   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:40.395362   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:40.504413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:43.005046   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:43.020464   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:43.020544   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:43.066403   66615 cri.go:89] found id: ""
	I0429 20:08:43.066432   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.066444   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:43.066452   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:43.066548   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:43.109732   66615 cri.go:89] found id: ""
	I0429 20:08:43.109760   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.109771   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:43.109778   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:43.109850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:43.158457   66615 cri.go:89] found id: ""
	I0429 20:08:43.158483   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.158492   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:43.158498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:43.158561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:43.207170   66615 cri.go:89] found id: ""
	I0429 20:08:43.207201   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.207213   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:43.207221   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:43.207281   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:43.246746   66615 cri.go:89] found id: ""
	I0429 20:08:43.246783   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.246804   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:43.246811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:43.246875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:43.292786   66615 cri.go:89] found id: ""
	I0429 20:08:43.292813   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.292824   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:43.292831   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:43.292896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:43.337509   66615 cri.go:89] found id: ""
	I0429 20:08:43.337537   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.337546   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:43.337551   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:43.337601   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:43.378446   66615 cri.go:89] found id: ""
	I0429 20:08:43.378473   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.378481   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:43.378490   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:43.378502   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:43.460438   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:43.460474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:43.503908   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:43.503945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:43.561661   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:43.561699   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:43.577924   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:43.577954   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:43.667006   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:40.555875   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:43.057183   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:44.049342   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.049873   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:44.256458   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.256554   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.168175   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:46.212494   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:46.212579   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:46.251567   66615 cri.go:89] found id: ""
	I0429 20:08:46.251593   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.251603   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:46.251610   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:46.251673   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:46.291913   66615 cri.go:89] found id: ""
	I0429 20:08:46.291943   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.291955   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:46.291962   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:46.292023   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:46.331801   66615 cri.go:89] found id: ""
	I0429 20:08:46.331827   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.331836   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:46.331842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:46.331899   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:46.375956   66615 cri.go:89] found id: ""
	I0429 20:08:46.375989   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.376001   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:46.376008   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:46.376090   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:46.425572   66615 cri.go:89] found id: ""
	I0429 20:08:46.425599   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.425609   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:46.425618   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:46.425681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:46.468161   66615 cri.go:89] found id: ""
	I0429 20:08:46.468226   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.468249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:46.468263   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:46.468433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:46.512163   66615 cri.go:89] found id: ""
	I0429 20:08:46.512193   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.512205   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:46.512212   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:46.512277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:46.556047   66615 cri.go:89] found id: ""
	I0429 20:08:46.556078   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.556088   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:46.556099   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:46.556111   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:46.609886   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:46.609921   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:46.625848   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:46.625878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:46.699005   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:46.699037   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:46.699053   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:46.783886   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:46.783923   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.331288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:49.344805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:49.344864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:49.381576   66615 cri.go:89] found id: ""
	I0429 20:08:49.381598   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.381605   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:49.381619   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:49.381667   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:49.418276   66615 cri.go:89] found id: ""
	I0429 20:08:49.418316   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.418329   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:49.418336   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:49.418389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:49.460147   66615 cri.go:89] found id: ""
	I0429 20:08:49.460177   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.460188   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:49.460195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:49.460253   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:49.500534   66615 cri.go:89] found id: ""
	I0429 20:08:49.500562   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.500569   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:49.500575   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:49.500632   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:49.538481   66615 cri.go:89] found id: ""
	I0429 20:08:49.538521   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.538534   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:49.538541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:49.538603   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:49.580192   66615 cri.go:89] found id: ""
	I0429 20:08:49.580218   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.580228   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:49.580234   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:49.580299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:49.616400   66615 cri.go:89] found id: ""
	I0429 20:08:49.616427   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.616437   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:49.616444   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:49.616551   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:49.652871   66615 cri.go:89] found id: ""
	I0429 20:08:49.652900   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.652918   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:49.652931   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:49.652947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:49.728173   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:49.728200   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:49.728212   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:49.813701   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:49.813749   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.855685   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:49.855712   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:49.906480   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:49.906514   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:45.559939   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.056008   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.056054   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.052578   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.550638   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.550910   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.257460   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.259418   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.757365   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.422430   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:52.437412   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:52.437488   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:52.476896   66615 cri.go:89] found id: ""
	I0429 20:08:52.476919   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.476927   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:52.476932   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:52.476976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:52.517266   66615 cri.go:89] found id: ""
	I0429 20:08:52.517298   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.517310   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:52.517318   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:52.517381   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:52.560886   66615 cri.go:89] found id: ""
	I0429 20:08:52.560909   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.560917   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:52.560922   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:52.560969   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:52.601362   66615 cri.go:89] found id: ""
	I0429 20:08:52.601398   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.601419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:52.601429   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:52.601506   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:52.639544   66615 cri.go:89] found id: ""
	I0429 20:08:52.639580   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.639591   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:52.639599   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:52.639652   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:52.681088   66615 cri.go:89] found id: ""
	I0429 20:08:52.681120   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.681130   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:52.681138   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:52.681204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:52.721777   66615 cri.go:89] found id: ""
	I0429 20:08:52.721802   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.721820   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:52.721828   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:52.721900   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:52.762823   66615 cri.go:89] found id: ""
	I0429 20:08:52.762845   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.762856   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:52.762863   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:52.762875   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:52.819291   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:52.819326   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:52.847120   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:52.847165   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:52.956274   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:52.956301   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:52.956317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:53.041636   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:53.041676   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:52.056558   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:54.555745   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.051656   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:57.549668   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.257083   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:57.757855   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.592636   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:55.607372   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:55.607449   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:55.643959   66615 cri.go:89] found id: ""
	I0429 20:08:55.643991   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.644000   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:55.644005   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:55.644061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:55.682272   66615 cri.go:89] found id: ""
	I0429 20:08:55.682304   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.682315   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:55.682323   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:55.682384   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:55.720157   66615 cri.go:89] found id: ""
	I0429 20:08:55.720189   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.720200   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:55.720207   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:55.720272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:55.761748   66615 cri.go:89] found id: ""
	I0429 20:08:55.761773   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.761781   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:55.761786   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:55.761842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:55.802377   66615 cri.go:89] found id: ""
	I0429 20:08:55.802405   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.802416   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:55.802423   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:55.802494   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:55.838986   66615 cri.go:89] found id: ""
	I0429 20:08:55.839016   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.839024   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:55.839030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:55.839077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:55.874991   66615 cri.go:89] found id: ""
	I0429 20:08:55.875022   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.875032   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:55.875039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:55.875106   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:55.913561   66615 cri.go:89] found id: ""
	I0429 20:08:55.913595   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.913607   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:55.913618   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:55.913633   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:55.965355   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:55.965391   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:55.981222   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:55.981259   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:56.056656   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:56.056685   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:56.056701   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:56.135276   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:56.135309   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:58.682855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:58.701679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:58.701769   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:58.760807   66615 cri.go:89] found id: ""
	I0429 20:08:58.760828   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.760841   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:58.760858   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:58.760910   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:58.835167   66615 cri.go:89] found id: ""
	I0429 20:08:58.835204   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.835216   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:58.835223   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:58.835289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:58.877367   66615 cri.go:89] found id: ""
	I0429 20:08:58.877398   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.877409   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:58.877417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:58.877483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:58.923726   66615 cri.go:89] found id: ""
	I0429 20:08:58.923751   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.923760   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:58.923766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:58.923817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:58.967780   66615 cri.go:89] found id: ""
	I0429 20:08:58.967804   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.967811   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:58.967816   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:58.967865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:59.010646   66615 cri.go:89] found id: ""
	I0429 20:08:59.010682   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.010690   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:59.010697   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:59.010759   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:59.057380   66615 cri.go:89] found id: ""
	I0429 20:08:59.057408   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.057418   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:59.057426   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:59.057483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:59.099669   66615 cri.go:89] found id: ""
	I0429 20:08:59.099698   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.099706   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:59.099715   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:59.099731   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:59.146831   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:59.146861   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:59.204232   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:59.204274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:59.219799   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:59.219824   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:59.305438   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:59.305465   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:59.305481   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:56.555976   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:58.557892   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:00.049511   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:02.050709   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:00.256064   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:02.257053   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:01.885861   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:01.900746   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:01.900808   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:01.942174   66615 cri.go:89] found id: ""
	I0429 20:09:01.942210   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.942218   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:01.942224   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:01.942285   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:01.986463   66615 cri.go:89] found id: ""
	I0429 20:09:01.986491   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.986502   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:01.986509   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:01.986570   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:02.026290   66615 cri.go:89] found id: ""
	I0429 20:09:02.026314   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.026321   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:02.026327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:02.026375   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:02.064239   66615 cri.go:89] found id: ""
	I0429 20:09:02.064259   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.064266   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:02.064271   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:02.064321   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:02.105807   66615 cri.go:89] found id: ""
	I0429 20:09:02.105838   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.105857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:02.105866   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:02.105926   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:02.144939   66615 cri.go:89] found id: ""
	I0429 20:09:02.144962   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.144970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:02.144975   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:02.145037   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:02.192866   66615 cri.go:89] found id: ""
	I0429 20:09:02.192891   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.192899   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:02.192905   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:02.192955   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:02.232485   66615 cri.go:89] found id: ""
	I0429 20:09:02.232515   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.232524   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:02.232533   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:02.232550   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:02.287374   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:02.287402   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:02.302979   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:02.303009   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:02.380693   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:02.380713   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:02.380725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:02.467048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:02.467084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:01.055311   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:03.055538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:05.056325   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:04.051014   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:06.556497   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:04.758329   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:07.256328   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:05.018176   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:05.033178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:05.033238   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:05.079008   66615 cri.go:89] found id: ""
	I0429 20:09:05.079034   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.079043   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:05.079050   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:05.079113   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:05.118620   66615 cri.go:89] found id: ""
	I0429 20:09:05.118642   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.118650   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:05.118655   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:05.118714   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:05.159603   66615 cri.go:89] found id: ""
	I0429 20:09:05.159646   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.159660   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:05.159666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:05.159733   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:05.200224   66615 cri.go:89] found id: ""
	I0429 20:09:05.200252   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.200262   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:05.200270   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:05.200344   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:05.246341   66615 cri.go:89] found id: ""
	I0429 20:09:05.246384   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.246396   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:05.246403   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:05.246471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:05.286126   66615 cri.go:89] found id: ""
	I0429 20:09:05.286153   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.286163   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:05.286171   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:05.286235   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:05.326911   66615 cri.go:89] found id: ""
	I0429 20:09:05.326941   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.326952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:05.326958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:05.327019   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:05.365564   66615 cri.go:89] found id: ""
	I0429 20:09:05.365592   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.365602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:05.365621   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:05.365637   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:05.445857   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:05.445877   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:05.445889   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:05.530129   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:05.530164   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:05.573936   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:05.573971   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:05.631263   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:05.631299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:08.147288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:08.162949   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:08.163021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:08.203009   66615 cri.go:89] found id: ""
	I0429 20:09:08.203033   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.203041   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:08.203047   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:08.203112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:08.241708   66615 cri.go:89] found id: ""
	I0429 20:09:08.241735   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.241744   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:08.241750   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:08.241801   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:08.283976   66615 cri.go:89] found id: ""
	I0429 20:09:08.284005   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.284017   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:08.284023   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:08.284091   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:08.323909   66615 cri.go:89] found id: ""
	I0429 20:09:08.323939   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.323951   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:08.323962   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:08.324031   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:08.363236   66615 cri.go:89] found id: ""
	I0429 20:09:08.363263   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.363271   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:08.363276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:08.363328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:08.401767   66615 cri.go:89] found id: ""
	I0429 20:09:08.401790   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.401798   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:08.401803   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:08.401851   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:08.443678   66615 cri.go:89] found id: ""
	I0429 20:09:08.443709   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.443726   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:08.443731   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:08.443791   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:08.489025   66615 cri.go:89] found id: ""
	I0429 20:09:08.489069   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.489103   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:08.489129   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:08.489163   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:08.543421   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:08.543462   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:08.560425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:08.560459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:08.642819   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:08.642840   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:08.642855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:08.726644   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:08.726682   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:07.555523   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.556138   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.049664   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.050246   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.256452   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.257458   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.277817   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:11.292340   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:11.292420   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:11.330721   66615 cri.go:89] found id: ""
	I0429 20:09:11.330756   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.330768   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:11.330776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:11.330850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:11.372057   66615 cri.go:89] found id: ""
	I0429 20:09:11.372089   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.372098   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:11.372103   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:11.372155   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:11.414786   66615 cri.go:89] found id: ""
	I0429 20:09:11.414814   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.414825   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:11.414832   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:11.414898   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:11.454934   66615 cri.go:89] found id: ""
	I0429 20:09:11.454961   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.454969   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:11.454974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:11.455039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:11.494169   66615 cri.go:89] found id: ""
	I0429 20:09:11.494200   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.494211   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:11.494217   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:11.494277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:11.541646   66615 cri.go:89] found id: ""
	I0429 20:09:11.541684   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.541694   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:11.541701   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:11.541766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:11.584025   66615 cri.go:89] found id: ""
	I0429 20:09:11.584055   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.584067   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:11.584075   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:11.584138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:11.622425   66615 cri.go:89] found id: ""
	I0429 20:09:11.622459   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.622471   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:11.622481   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:11.622493   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:11.676416   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:11.676450   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:11.693793   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:11.693822   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:11.771410   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:11.771437   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:11.771454   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:11.854969   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:11.855047   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:14.398871   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:14.415894   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:14.415983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:14.454718   66615 cri.go:89] found id: ""
	I0429 20:09:14.454752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.454763   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:14.454773   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:14.454836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:14.498562   66615 cri.go:89] found id: ""
	I0429 20:09:14.498591   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.498602   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:14.498609   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:14.498669   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:14.536357   66615 cri.go:89] found id: ""
	I0429 20:09:14.536384   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.536395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:14.536402   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:14.536460   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:14.577240   66615 cri.go:89] found id: ""
	I0429 20:09:14.577274   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.577284   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:14.577291   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:14.577372   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:14.617231   66615 cri.go:89] found id: ""
	I0429 20:09:14.617266   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.617279   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:14.617287   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:14.617355   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:14.659053   66615 cri.go:89] found id: ""
	I0429 20:09:14.659081   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.659090   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:14.659096   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:14.659145   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:14.708723   66615 cri.go:89] found id: ""
	I0429 20:09:14.708752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.708760   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:14.708766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:14.708814   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:14.753732   66615 cri.go:89] found id: ""
	I0429 20:09:14.753762   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.753773   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:14.753783   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:14.753798   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:14.771952   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:14.771985   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:14.842649   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:14.842680   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:14.842696   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:14.925565   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:14.925603   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:11.556903   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:14.057196   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:13.550999   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:16.054439   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:13.257735   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:15.756651   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:17.756760   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:14.975731   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:14.975765   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.528872   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:17.544373   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:17.544455   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:17.582977   66615 cri.go:89] found id: ""
	I0429 20:09:17.583001   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.583009   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:17.583014   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:17.583079   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:17.620322   66615 cri.go:89] found id: ""
	I0429 20:09:17.620352   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.620368   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:17.620373   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:17.620421   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:17.664339   66615 cri.go:89] found id: ""
	I0429 20:09:17.664367   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.664375   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:17.664381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:17.664433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:17.705150   66615 cri.go:89] found id: ""
	I0429 20:09:17.705175   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.705184   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:17.705189   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:17.705239   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:17.749713   66615 cri.go:89] found id: ""
	I0429 20:09:17.749738   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.749747   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:17.749752   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:17.749850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:17.791528   66615 cri.go:89] found id: ""
	I0429 20:09:17.791552   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.791560   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:17.791566   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:17.791615   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:17.834994   66615 cri.go:89] found id: ""
	I0429 20:09:17.835024   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.835035   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:17.835050   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:17.835107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:17.872194   66615 cri.go:89] found id: ""
	I0429 20:09:17.872226   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.872236   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:17.872248   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:17.872263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.926899   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:17.926936   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:17.944184   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:17.944218   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:18.029224   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:18.029246   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:18.029258   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:18.111112   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:18.111147   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:16.557282   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:19.056682   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:18.549106   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:20.550026   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:19.758897   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:22.257104   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:20.655965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:20.671420   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:20.671487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:20.710100   66615 cri.go:89] found id: ""
	I0429 20:09:20.710132   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.710144   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:20.710151   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:20.710221   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:20.748849   66615 cri.go:89] found id: ""
	I0429 20:09:20.748877   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.748888   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:20.748894   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:20.748956   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:20.788113   66615 cri.go:89] found id: ""
	I0429 20:09:20.788140   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.788151   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:20.788157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:20.788217   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:20.831432   66615 cri.go:89] found id: ""
	I0429 20:09:20.831455   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.831462   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:20.831470   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:20.831518   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:20.878156   66615 cri.go:89] found id: ""
	I0429 20:09:20.878183   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.878191   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:20.878197   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:20.878262   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:20.920691   66615 cri.go:89] found id: ""
	I0429 20:09:20.920718   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.920729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:20.920735   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:20.920795   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:20.960674   66615 cri.go:89] found id: ""
	I0429 20:09:20.960709   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.960719   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:20.960726   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:20.960786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:21.006462   66615 cri.go:89] found id: ""
	I0429 20:09:21.006486   66615 logs.go:276] 0 containers: []
	W0429 20:09:21.006495   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:21.006503   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:21.006518   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:21.060040   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:21.060076   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:21.077141   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:21.077171   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:21.157058   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:21.157083   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:21.157096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:21.265626   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:21.265662   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
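The cycle above repeats on every retry: minikube probes CRI-O for each expected control-plane container by name, finds none, and then falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal shell sketch of that probe sequence, built only from commands that appear verbatim in this log, looks like:

    # Check whether an apiserver process is running at all.
    sudo pgrep -xnf kube-apiserver.*minikube.*
    # Probe CRI-O for each control-plane container; while the apiserver is down
    # every probe returns nothing, hence the repeated `found id: ""` lines.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"
    done
    # Fallback log sources gathered after the probes come back empty.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a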
	I0429 20:09:23.813718   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:23.828338   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:23.828400   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:23.868730   66615 cri.go:89] found id: ""
	I0429 20:09:23.868760   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.868771   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:23.868776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:23.868842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:23.907919   66615 cri.go:89] found id: ""
	I0429 20:09:23.907941   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.907949   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:23.907956   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:23.908011   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:23.956769   66615 cri.go:89] found id: ""
	I0429 20:09:23.956794   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.956805   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:23.956811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:23.956875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:23.998578   66615 cri.go:89] found id: ""
	I0429 20:09:23.998612   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.998621   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:23.998628   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:23.998681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:24.037458   66615 cri.go:89] found id: ""
	I0429 20:09:24.037485   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.037492   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:24.037499   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:24.037562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:24.078305   66615 cri.go:89] found id: ""
	I0429 20:09:24.078336   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.078351   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:24.078358   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:24.078418   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:24.120100   66615 cri.go:89] found id: ""
	I0429 20:09:24.120129   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.120139   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:24.120147   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:24.120211   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:24.160953   66615 cri.go:89] found id: ""
	I0429 20:09:24.160988   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.161000   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:24.161012   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:24.161029   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:24.176654   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:24.176686   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:24.256631   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:24.256652   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:24.256668   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:24.335379   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:24.335424   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:24.379616   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:24.379649   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:21.556726   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:24.057483   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:23.050004   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:25.550882   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:27.551051   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:24.257726   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:26.757098   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:26.937283   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:26.956185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:26.956252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:26.997000   66615 cri.go:89] found id: ""
	I0429 20:09:26.997034   66615 logs.go:276] 0 containers: []
	W0429 20:09:26.997046   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:26.997053   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:26.997115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:27.042494   66615 cri.go:89] found id: ""
	I0429 20:09:27.042527   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.042538   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:27.042546   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:27.042608   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:27.086170   66615 cri.go:89] found id: ""
	I0429 20:09:27.086199   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.086211   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:27.086218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:27.086282   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:27.126502   66615 cri.go:89] found id: ""
	I0429 20:09:27.126531   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.126542   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:27.126560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:27.126635   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:27.175102   66615 cri.go:89] found id: ""
	I0429 20:09:27.175134   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.175142   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:27.175148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:27.175216   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:27.215983   66615 cri.go:89] found id: ""
	I0429 20:09:27.216013   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.216025   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:27.216033   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:27.216097   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:27.256427   66615 cri.go:89] found id: ""
	I0429 20:09:27.256456   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.256467   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:27.256474   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:27.256540   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:27.298444   66615 cri.go:89] found id: ""
	I0429 20:09:27.298479   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.298490   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:27.298501   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:27.298517   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:27.381579   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:27.381625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:27.429304   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:27.429350   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:27.483044   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:27.483082   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:27.500304   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:27.500332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:27.583909   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
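Every describe-nodes attempt fails the same way: the kubeconfig on the node points at localhost:8443 and nothing is listening there, so kubectl reports "connection refused". Illustrative spot-checks one could run on the node to confirm the apiserver port is closed (neither command appears in this log; they are assumptions, not part of the test run):

    # Hypothetical checks, not taken from this log.
    sudo ss -tlnp | grep 8443 || echo "nothing listening on :8443"
    curl -ks https://localhost:8443/healthz || echo "apiserver not responding"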
	I0429 20:09:26.555285   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:28.560544   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:30.049769   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:32.050537   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:29.256689   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:31.257554   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:30.084904   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:30.102417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:30.102486   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:30.146726   66615 cri.go:89] found id: ""
	I0429 20:09:30.146748   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.146755   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:30.146761   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:30.146809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:30.190739   66615 cri.go:89] found id: ""
	I0429 20:09:30.190768   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.190780   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:30.190788   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:30.190853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:30.228836   66615 cri.go:89] found id: ""
	I0429 20:09:30.228864   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.228879   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:30.228887   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:30.228951   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:30.270876   66615 cri.go:89] found id: ""
	I0429 20:09:30.270912   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.270920   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:30.270925   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:30.270995   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:30.310762   66615 cri.go:89] found id: ""
	I0429 20:09:30.310787   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.310795   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:30.310801   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:30.310850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:30.356339   66615 cri.go:89] found id: ""
	I0429 20:09:30.356363   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.356371   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:30.356376   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:30.356430   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:30.395540   66615 cri.go:89] found id: ""
	I0429 20:09:30.395575   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.395589   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:30.395598   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:30.395671   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:30.446237   66615 cri.go:89] found id: ""
	I0429 20:09:30.446263   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.446276   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:30.446286   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:30.446301   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:30.537309   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:30.537334   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:30.537349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:30.629116   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:30.629151   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:30.683308   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:30.683337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:30.735879   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:30.735910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.252322   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:33.268276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:33.268351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:33.309531   66615 cri.go:89] found id: ""
	I0429 20:09:33.309622   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.309641   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:33.309650   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:33.309719   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:33.367480   66615 cri.go:89] found id: ""
	I0429 20:09:33.367515   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.367527   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:33.367535   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:33.367595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:33.433717   66615 cri.go:89] found id: ""
	I0429 20:09:33.433742   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.433751   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:33.433756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:33.433820   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:33.484053   66615 cri.go:89] found id: ""
	I0429 20:09:33.484081   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.484093   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:33.484100   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:33.484165   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:33.524103   66615 cri.go:89] found id: ""
	I0429 20:09:33.524126   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.524136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:33.524143   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:33.524204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:33.565692   66615 cri.go:89] found id: ""
	I0429 20:09:33.565711   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.565719   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:33.565724   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:33.565784   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:33.607119   66615 cri.go:89] found id: ""
	I0429 20:09:33.607143   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.607153   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:33.607160   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:33.607225   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:33.648407   66615 cri.go:89] found id: ""
	I0429 20:09:33.648432   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.648440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:33.648449   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:33.648463   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:33.730744   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:33.730781   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:33.774295   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:33.774328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:33.829609   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:33.829653   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.846048   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:33.846092   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:33.924413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:31.056307   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:33.056538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:34.548872   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.550765   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:33.758571   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.257361   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.425072   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:36.440185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:36.440268   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:36.484364   66615 cri.go:89] found id: ""
	I0429 20:09:36.484386   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.484394   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:36.484400   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:36.484450   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:36.520436   66615 cri.go:89] found id: ""
	I0429 20:09:36.520466   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.520478   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:36.520487   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:36.520549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:36.563597   66615 cri.go:89] found id: ""
	I0429 20:09:36.563622   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.563630   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:36.563635   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:36.563704   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:36.613106   66615 cri.go:89] found id: ""
	I0429 20:09:36.613134   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.613143   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:36.613148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:36.613204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:36.658127   66615 cri.go:89] found id: ""
	I0429 20:09:36.658151   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.658159   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:36.658166   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:36.658229   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:36.707388   66615 cri.go:89] found id: ""
	I0429 20:09:36.707415   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.707423   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:36.707430   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:36.707479   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:36.753363   66615 cri.go:89] found id: ""
	I0429 20:09:36.753394   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.753405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:36.753413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:36.753475   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:36.801492   66615 cri.go:89] found id: ""
	I0429 20:09:36.801513   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.801521   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:36.801530   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:36.801542   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:36.857055   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:36.857108   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:36.874567   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:36.874595   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:36.956176   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:36.956202   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:36.956217   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:37.039958   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:37.039997   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:39.591442   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:39.607842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:39.607927   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:39.651917   66615 cri.go:89] found id: ""
	I0429 20:09:39.651941   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.651948   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:39.651955   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:39.652020   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:39.690032   66615 cri.go:89] found id: ""
	I0429 20:09:39.690059   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.690078   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:39.690086   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:39.690152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:39.733176   66615 cri.go:89] found id: ""
	I0429 20:09:39.733200   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.733209   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:39.733215   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:39.733261   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:39.779528   66615 cri.go:89] found id: ""
	I0429 20:09:39.779560   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.779572   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:39.779581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:39.779650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:39.822408   66615 cri.go:89] found id: ""
	I0429 20:09:39.822436   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.822445   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:39.822452   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:39.822522   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:39.864895   66615 cri.go:89] found id: ""
	I0429 20:09:39.864922   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.864930   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:39.864938   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:39.865008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:39.907498   66615 cri.go:89] found id: ""
	I0429 20:09:39.907523   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.907533   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:39.907539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:39.907606   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:39.948400   66615 cri.go:89] found id: ""
	I0429 20:09:39.948430   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.948440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:39.948449   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:39.948465   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:35.557262   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:38.056877   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:40.058568   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:39.049938   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:41.050139   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:38.756883   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:41.256775   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:39.964733   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:39.964763   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:40.043568   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:40.043593   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:40.043609   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:40.130776   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:40.130815   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:40.182011   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:40.182042   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:42.739068   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:42.756144   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:42.756286   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:42.798776   66615 cri.go:89] found id: ""
	I0429 20:09:42.798801   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.798810   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:42.798815   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:42.798861   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:42.837122   66615 cri.go:89] found id: ""
	I0429 20:09:42.837146   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.837154   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:42.837159   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:42.837205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:42.875435   66615 cri.go:89] found id: ""
	I0429 20:09:42.875461   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.875471   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:42.875479   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:42.875536   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:42.920044   66615 cri.go:89] found id: ""
	I0429 20:09:42.920076   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.920087   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:42.920094   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:42.920175   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:42.960122   66615 cri.go:89] found id: ""
	I0429 20:09:42.960152   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.960163   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:42.960169   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:42.960215   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:42.999784   66615 cri.go:89] found id: ""
	I0429 20:09:42.999811   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.999829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:42.999837   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:42.999917   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:43.040882   66615 cri.go:89] found id: ""
	I0429 20:09:43.040930   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.040952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:43.040959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:43.041044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:43.082596   66615 cri.go:89] found id: ""
	I0429 20:09:43.082627   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.082639   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:43.082650   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:43.082672   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:43.140302   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:43.140343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:43.157508   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:43.157547   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:43.241025   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:43.241047   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:43.241061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:43.325820   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:43.325855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:42.058727   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:44.556415   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:43.051020   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.550017   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:43.258400   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.756441   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:47.757029   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.871561   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:45.887323   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:45.887398   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:45.930021   66615 cri.go:89] found id: ""
	I0429 20:09:45.930050   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.930062   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:45.930088   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:45.930148   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:45.971404   66615 cri.go:89] found id: ""
	I0429 20:09:45.971434   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.971445   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:45.971452   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:45.971513   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:46.018801   66615 cri.go:89] found id: ""
	I0429 20:09:46.018825   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.018833   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:46.018838   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:46.018886   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:46.065118   66615 cri.go:89] found id: ""
	I0429 20:09:46.065140   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.065148   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:46.065153   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:46.065201   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:46.105244   66615 cri.go:89] found id: ""
	I0429 20:09:46.105271   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.105294   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:46.105309   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:46.105373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:46.153736   66615 cri.go:89] found id: ""
	I0429 20:09:46.153759   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.153768   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:46.153773   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:46.153836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:46.198940   66615 cri.go:89] found id: ""
	I0429 20:09:46.198965   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.198973   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:46.198979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:46.199064   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:46.238001   66615 cri.go:89] found id: ""
	I0429 20:09:46.238031   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.238044   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:46.238056   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:46.238087   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:46.292309   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:46.292357   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:46.307243   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:46.307274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:46.386832   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:46.386852   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:46.386869   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:46.468856   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:46.468891   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.017354   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:49.032753   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:49.032832   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:49.075345   66615 cri.go:89] found id: ""
	I0429 20:09:49.075375   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.075388   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:49.075394   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:49.075447   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:49.115294   66615 cri.go:89] found id: ""
	I0429 20:09:49.115328   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.115339   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:49.115347   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:49.115412   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:49.164115   66615 cri.go:89] found id: ""
	I0429 20:09:49.164140   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.164148   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:49.164154   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:49.164210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:49.207643   66615 cri.go:89] found id: ""
	I0429 20:09:49.207668   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.207679   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:49.207698   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:49.207762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:49.247121   66615 cri.go:89] found id: ""
	I0429 20:09:49.247147   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.247156   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:49.247162   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:49.247220   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:49.288594   66615 cri.go:89] found id: ""
	I0429 20:09:49.288626   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.288636   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:49.288643   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:49.288711   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:49.330243   66615 cri.go:89] found id: ""
	I0429 20:09:49.330273   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.330290   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:49.330300   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:49.330365   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:49.371304   66615 cri.go:89] found id: ""
	I0429 20:09:49.371348   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.371360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:49.371372   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:49.371392   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:49.450910   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:49.450949   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.494940   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:49.494970   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:49.553320   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:49.553364   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:49.568850   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:49.568878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:49.644932   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
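
Each retry cycle above follows the same shape: ask the CRI runtime for a named control-plane container, treat empty output as "not found", and fall back to gathering kubelet, dmesg, and CRI-O logs because the unreachable apiserver makes `kubectl describe nodes` fail. A minimal local sketch of that lookup, written as a standalone Go program and assuming sudo and crictl are available on the PATH (the real code issues the same command over SSH inside the VM via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainersByName mirrors the "listing CRI containers" step in the log:
// run `crictl ps -a --quiet --name=<name>` and treat empty output as "no
// container found". This is a simplified local sketch, not minikube's code.
func listContainersByName(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainersByName(name)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			// This is the case the log keeps reporting while the control plane is down.
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%q containers: %v\n", name, ids)
	}
}
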
	I0429 20:09:46.559246   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:49.056790   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:48.050285   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:50.050579   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.549882   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:49.757113   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.258680   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.145702   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:52.162681   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:52.162756   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:52.204816   66615 cri.go:89] found id: ""
	I0429 20:09:52.204858   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.204870   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:52.204888   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:52.204963   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:52.248481   66615 cri.go:89] found id: ""
	I0429 20:09:52.248510   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.248519   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:52.248525   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:52.248596   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:52.289158   66615 cri.go:89] found id: ""
	I0429 20:09:52.289186   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.289194   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:52.289200   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:52.289260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:52.329905   66615 cri.go:89] found id: ""
	I0429 20:09:52.329931   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.329942   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:52.329950   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:52.330025   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:52.372523   66615 cri.go:89] found id: ""
	I0429 20:09:52.372546   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.372554   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:52.372560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:52.372623   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:52.414936   66615 cri.go:89] found id: ""
	I0429 20:09:52.414970   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.414982   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:52.414989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:52.415056   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:52.454139   66615 cri.go:89] found id: ""
	I0429 20:09:52.454164   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.454172   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:52.454178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:52.454236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:52.494093   66615 cri.go:89] found id: ""
	I0429 20:09:52.494129   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.494142   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:52.494155   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:52.494195   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:52.552104   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:52.552142   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:52.568430   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:52.568459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:52.649708   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:52.649736   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:52.649752   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:52.746231   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:52.746272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:51.057536   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:53.556862   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:55.049835   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:57.050606   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:54.759308   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:57.256396   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:55.296228   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:55.311257   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:55.311328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:55.352071   66615 cri.go:89] found id: ""
	I0429 20:09:55.352098   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.352109   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:55.352116   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:55.352177   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:55.399806   66615 cri.go:89] found id: ""
	I0429 20:09:55.399837   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.399847   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:55.399860   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:55.399947   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:55.444372   66615 cri.go:89] found id: ""
	I0429 20:09:55.444398   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.444406   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:55.444411   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:55.444468   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:55.485542   66615 cri.go:89] found id: ""
	I0429 20:09:55.485568   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.485579   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:55.485586   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:55.485670   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:55.535452   66615 cri.go:89] found id: ""
	I0429 20:09:55.535483   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.535494   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:55.535502   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:55.535566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:55.578009   66615 cri.go:89] found id: ""
	I0429 20:09:55.578036   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.578048   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:55.578056   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:55.578138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:55.618302   66615 cri.go:89] found id: ""
	I0429 20:09:55.618336   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.618347   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:55.618355   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:55.618419   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:55.660489   66615 cri.go:89] found id: ""
	I0429 20:09:55.660518   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.660526   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:55.660535   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:55.660548   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:55.713953   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:55.713993   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:55.729624   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:55.729656   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:55.813718   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:55.813746   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:55.813762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:55.898805   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:55.898849   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:58.467014   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:58.482852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:58.482925   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:58.522862   66615 cri.go:89] found id: ""
	I0429 20:09:58.522896   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.522908   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:58.522916   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:58.523000   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:58.568234   66615 cri.go:89] found id: ""
	I0429 20:09:58.568259   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.568266   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:58.568272   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:58.568327   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:58.609147   66615 cri.go:89] found id: ""
	I0429 20:09:58.609175   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.609185   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:58.609192   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:58.609265   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:58.657074   66615 cri.go:89] found id: ""
	I0429 20:09:58.657104   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.657115   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:58.657122   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:58.657186   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:58.706819   66615 cri.go:89] found id: ""
	I0429 20:09:58.706846   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.706857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:58.706865   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:58.706929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:58.754967   66615 cri.go:89] found id: ""
	I0429 20:09:58.754998   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.755007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:58.755018   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:58.755078   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:58.793657   66615 cri.go:89] found id: ""
	I0429 20:09:58.793694   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.793704   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:58.793709   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:58.793766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:58.832023   66615 cri.go:89] found id: ""
	I0429 20:09:58.832055   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.832066   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:58.832078   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:58.832094   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:58.886568   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:58.886605   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:58.902126   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:58.902154   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:58.986786   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:58.986814   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:58.986831   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:59.072258   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:59.072296   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:55.557245   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:58.056570   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:59.549825   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:02.050651   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:59.756493   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:01.756935   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:01.620172   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:01.636958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:01.637055   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:01.703865   66615 cri.go:89] found id: ""
	I0429 20:10:01.703890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.703899   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:01.703905   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:01.703950   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:01.742655   66615 cri.go:89] found id: ""
	I0429 20:10:01.742684   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.742692   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:01.742707   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:01.742778   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:01.782866   66615 cri.go:89] found id: ""
	I0429 20:10:01.782890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.782901   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:01.782908   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:01.782964   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:01.822958   66615 cri.go:89] found id: ""
	I0429 20:10:01.822984   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.822992   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:01.822997   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:01.823044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:01.868581   66615 cri.go:89] found id: ""
	I0429 20:10:01.868604   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.868612   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:01.868622   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:01.868675   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:01.908216   66615 cri.go:89] found id: ""
	I0429 20:10:01.908241   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.908249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:01.908255   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:01.908328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:01.953100   66615 cri.go:89] found id: ""
	I0429 20:10:01.953131   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.953142   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:01.953150   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:01.953213   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:01.999940   66615 cri.go:89] found id: ""
	I0429 20:10:01.999974   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.999988   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:01.999999   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:02.000012   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:02.061669   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:02.061704   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:02.077609   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:02.077640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:02.169643   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:02.169666   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:02.169679   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:02.250615   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:02.250657   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:04.803629   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:04.819286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:04.819364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:04.860501   66615 cri.go:89] found id: ""
	I0429 20:10:04.860530   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.860541   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:04.860548   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:04.860672   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:04.898444   66615 cri.go:89] found id: ""
	I0429 20:10:04.898472   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.898480   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:04.898486   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:04.898546   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:04.936569   66615 cri.go:89] found id: ""
	I0429 20:10:04.936599   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.936609   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:04.936617   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:04.936695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:00.556325   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:02.557754   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:05.058245   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:04.551711   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:07.050327   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:03.757096   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:06.257529   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:04.979667   66615 cri.go:89] found id: ""
	I0429 20:10:04.979696   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.979708   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:04.979715   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:04.979768   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:05.019608   66615 cri.go:89] found id: ""
	I0429 20:10:05.019638   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.019650   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:05.019658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:05.019724   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:05.063723   66615 cri.go:89] found id: ""
	I0429 20:10:05.063749   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.063758   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:05.063765   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:05.063821   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:05.106676   66615 cri.go:89] found id: ""
	I0429 20:10:05.106704   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.106714   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:05.106721   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:05.106783   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:05.147652   66615 cri.go:89] found id: ""
	I0429 20:10:05.147683   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.147693   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:05.147704   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:05.147721   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:05.189048   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:05.189085   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:05.248635   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:05.248669   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:05.265791   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:05.265826   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:05.343190   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:05.343217   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:05.343234   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:07.926868   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:07.942581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:07.942656   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:07.981316   66615 cri.go:89] found id: ""
	I0429 20:10:07.981349   66615 logs.go:276] 0 containers: []
	W0429 20:10:07.981361   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:07.981368   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:07.981429   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:08.024017   66615 cri.go:89] found id: ""
	I0429 20:10:08.024045   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.024056   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:08.024062   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:08.024146   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:08.075761   66615 cri.go:89] found id: ""
	I0429 20:10:08.075786   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.075798   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:08.075805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:08.075864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:08.146501   66615 cri.go:89] found id: ""
	I0429 20:10:08.146528   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.146536   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:08.146541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:08.146624   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:08.204987   66615 cri.go:89] found id: ""
	I0429 20:10:08.205013   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.205021   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:08.205027   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:08.205083   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:08.244930   66615 cri.go:89] found id: ""
	I0429 20:10:08.244959   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.244970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:08.244979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:08.245040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:08.284204   66615 cri.go:89] found id: ""
	I0429 20:10:08.284232   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.284243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:08.284250   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:08.284305   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:08.324077   66615 cri.go:89] found id: ""
	I0429 20:10:08.324102   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.324113   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:08.324123   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:08.324139   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:08.341584   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:08.341614   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:08.429808   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:08.429827   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:08.429840   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:08.509906   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:08.509942   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:08.562662   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:08.562697   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:07.557462   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:10.055718   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:09.553108   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:12.050533   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:12.543954   66218 pod_ready.go:81] duration metric: took 4m0.001047967s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" ...
	E0429 20:10:12.543994   66218 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 20:10:12.544032   66218 pod_ready.go:38] duration metric: took 4m6.615064199s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:10:12.544058   66218 kubeadm.go:591] duration metric: took 4m18.60301174s to restartPrimaryControlPlane
	W0429 20:10:12.544116   66218 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:10:12.544146   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:10:08.757127   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:10.760764   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
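
The interleaved pod_ready.go lines come from a readiness poll: each minikube process keeps checking a metrics-server pod's Ready condition and gives up once a 4m0s deadline passes, as process 66218 did a few lines above. A minimal sketch of that wait pattern, using only the Go standard library and a hypothetical podIsReady helper in place of minikube's API client:

package main

import (
	"context"
	"fmt"
	"time"
)

// podIsReady is a hypothetical stand-in for the check that reads the pod's
// Ready condition through the Kubernetes API; it is not minikube's real helper.
func podIsReady(ctx context.Context, namespace, name string) (bool, error) {
	return false, nil // pretend the pod never becomes Ready, as in the log above
}

// waitPodReady polls until the pod reports Ready or the deadline passes,
// mirroring the repeated `has status "Ready":"False"` lines and the final
// "timed out waiting 4m0s" message.
func waitPodReady(namespace, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		ready, err := podIsReady(ctx, namespace, name)
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for pod %s/%s to be Ready", namespace, name)
		case <-ticker.C:
		}
	}
}

func main() {
	// The real wait uses a 4m0s deadline; a short one keeps the sketch quick to run.
	if err := waitPodReady("kube-system", "metrics-server-569cc877fc-g6gw2", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}
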
	I0429 20:10:11.121673   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:11.137328   66615 kubeadm.go:591] duration metric: took 4m4.72832668s to restartPrimaryControlPlane
	W0429 20:10:11.137411   66615 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:10:11.137446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:10:13.254357   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.116867978s)
	I0429 20:10:13.254436   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:13.275293   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:10:13.287073   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:10:13.298046   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:10:13.298080   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:10:13.298132   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:10:13.311790   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:10:13.311861   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:10:13.323201   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:10:13.334284   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:10:13.334357   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:10:13.348597   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.361993   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:10:13.362055   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.376185   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:10:13.389715   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:10:13.389778   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:10:13.403955   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:10:13.675887   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
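
Before re-initialising, process 66615 checks each kubeconfig for the expected control-plane endpoint, deletes any file that does not reference it, and then runs `kubeadm init` from the generated config. A condensed Go sketch of that cleanup-then-init pattern (paths and endpoint taken from the log; the preflight-error list is abbreviated and plain file access stands in for the sudo-over-SSH grep/rm commands):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// cleanStaleKubeconfigs condenses the pattern above: keep a kubeconfig only if
// it references the expected control-plane endpoint, otherwise delete it so
// `kubeadm init` can regenerate it. This is an illustrative sketch, not
// minikube's implementation.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(f) // missing or stale: remove so init recreates it
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	// Re-initialise from the generated config; the log's invocation ignores a
	// longer list of preflight errors, abbreviated here.
	out, err := exec.Command("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem").CombinedOutput()
	fmt.Println(string(out), err)
}
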
	I0429 20:10:12.056403   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:14.059895   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:13.257345   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:15.257388   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:17.259138   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:16.557200   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:18.559617   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:19.756708   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:21.757655   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:21.056581   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:23.057477   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:24.256386   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:26.757303   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:25.556902   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:28.055172   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:30.056549   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:29.256790   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:31.757538   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:32.560174   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:35.056286   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:33.758717   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:36.257274   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:37.056603   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:39.557292   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:38.757913   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:40.758857   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:42.056927   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:44.557003   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:44.557038   66875 pod_ready.go:81] duration metric: took 4m0.008018273s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	E0429 20:10:44.557050   66875 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0429 20:10:44.557062   66875 pod_ready.go:38] duration metric: took 4m2.911025288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:10:44.557085   66875 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:10:44.557123   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:44.557191   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:44.620871   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:44.620900   66875 cri.go:89] found id: ""
	I0429 20:10:44.620910   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:44.620970   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.626852   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:44.626919   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:44.673726   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:44.673753   66875 cri.go:89] found id: ""
	I0429 20:10:44.673762   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:44.673827   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.680083   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:44.680157   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:44.724866   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:44.724899   66875 cri.go:89] found id: ""
	I0429 20:10:44.724909   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:44.724976   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.730438   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:44.730492   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:44.785159   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:44.785178   66875 cri.go:89] found id: ""
	I0429 20:10:44.785185   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:44.785230   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.790370   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:44.790432   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:44.839200   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:44.839219   66875 cri.go:89] found id: ""
	I0429 20:10:44.839226   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:44.839277   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.845411   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:44.845490   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:44.907184   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:44.907210   66875 cri.go:89] found id: ""
	I0429 20:10:44.907224   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:44.907281   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.914531   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:44.914596   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:44.957389   66875 cri.go:89] found id: ""
	I0429 20:10:44.957422   66875 logs.go:276] 0 containers: []
	W0429 20:10:44.957430   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:44.957436   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:44.957493   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:45.001760   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:45.001783   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:45.001789   66875 cri.go:89] found id: ""
	I0429 20:10:45.001796   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:45.001845   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:45.007293   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:45.012864   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:45.012886   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:45.406875   66218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.862702626s)
	I0429 20:10:45.406957   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:45.424927   66218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:10:45.436628   66218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:10:45.447896   66218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:10:45.447921   66218 kubeadm.go:156] found existing configuration files:
	
	I0429 20:10:45.447970   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:10:45.458604   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:10:45.458662   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:10:45.469701   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:10:45.479738   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:10:45.479796   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:10:45.490097   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:10:45.500840   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:10:45.500903   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:10:45.512918   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:10:45.524679   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:10:45.524756   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
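
The grep/rm sequence above is the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A condensed sketch of the same logic, using the endpoint shown in the log:

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # drop any config that does not reference the expected endpoint
  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"
done
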
	I0429 20:10:45.536044   66218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:10:45.598481   66218 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:10:45.598556   66218 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:10:45.783162   66218 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:10:45.783321   66218 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:10:45.783481   66218 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:10:46.079842   66218 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:10:46.081981   66218 out.go:204]   - Generating certificates and keys ...
	I0429 20:10:46.082084   66218 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:10:46.082174   66218 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:10:46.082295   66218 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:10:46.082382   66218 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:10:46.082485   66218 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:10:46.082578   66218 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:10:46.082694   66218 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:10:46.082793   66218 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:10:46.082906   66218 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:10:46.082976   66218 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:10:46.083009   66218 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:10:46.083070   66218 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:10:46.242368   66218 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:10:46.667998   66218 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:10:46.832801   66218 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:10:47.033146   66218 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:10:47.265305   66218 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:10:47.266631   66218 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:10:47.271057   66218 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:10:47.273021   66218 out.go:204]   - Booting up control plane ...
	I0429 20:10:47.273128   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:10:47.273245   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:10:47.273333   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:10:47.293530   66218 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:10:47.294487   66218 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:10:47.294564   66218 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:10:47.435669   66218 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:10:47.435802   66218 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:10:43.256983   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:45.257106   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:47.757018   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:45.564197   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:45.564231   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:45.635133   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:45.635168   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:45.779957   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:45.779992   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:45.827796   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:45.827828   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:45.870603   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:45.870636   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:45.935181   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:45.935220   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:46.007476   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:46.007518   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:46.071132   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:46.071169   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:46.130185   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:46.130218   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:46.148649   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:46.148684   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:46.196227   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:46.196266   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:46.245663   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:46.245707   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:48.789522   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:48.810752   66875 api_server.go:72] duration metric: took 4m14.399329979s to wait for apiserver process to appear ...
	I0429 20:10:48.810785   66875 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:10:48.810826   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:48.810921   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:48.868391   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:48.868415   66875 cri.go:89] found id: ""
	I0429 20:10:48.868424   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:48.868490   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.874253   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:48.874329   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:48.934057   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:48.934103   66875 cri.go:89] found id: ""
	I0429 20:10:48.934113   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:48.934173   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.940161   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:48.940244   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:48.992205   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:48.992227   66875 cri.go:89] found id: ""
	I0429 20:10:48.992234   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:48.992297   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.997496   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:48.997568   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:49.038579   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:49.038612   66875 cri.go:89] found id: ""
	I0429 20:10:49.038622   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:49.038683   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.045062   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:49.045129   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:49.084533   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:49.084561   66875 cri.go:89] found id: ""
	I0429 20:10:49.084570   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:49.084628   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.089601   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:49.089680   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:49.133281   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:49.133315   66875 cri.go:89] found id: ""
	I0429 20:10:49.133324   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:49.133387   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.140784   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:49.140889   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:49.201071   66875 cri.go:89] found id: ""
	I0429 20:10:49.201102   66875 logs.go:276] 0 containers: []
	W0429 20:10:49.201112   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:49.201117   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:49.201182   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:49.248708   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:49.248732   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:49.248738   66875 cri.go:89] found id: ""
	I0429 20:10:49.248747   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:49.248807   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.254131   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.259257   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:49.259287   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:49.325386   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:49.325417   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:49.371335   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:49.371365   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:49.414056   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:49.414112   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:49.469457   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:49.469493   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:49.523091   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:49.523123   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:49.581937   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:49.581977   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:49.599704   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:49.599738   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:49.738943   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:49.738984   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:49.814482   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:49.814521   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:50.306035   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:50.306084   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:50.371400   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:50.371485   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:50.426578   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:50.426613   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:48.438095   66218 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002489157s
	I0429 20:10:48.438230   66218 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:10:49.758262   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:52.256578   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:53.941848   66218 kubeadm.go:309] [api-check] The API server is healthy after 5.503491397s
	I0429 20:10:53.961404   66218 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:10:53.979792   66218 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:10:54.018524   66218 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:10:54.018776   66218 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-456788 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:10:54.037050   66218 kubeadm.go:309] [bootstrap-token] Using token: 793n05.pmfi0tdyn7q4x0lt
	I0429 20:10:54.038421   66218 out.go:204]   - Configuring RBAC rules ...
	I0429 20:10:54.038551   66218 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:10:54.045190   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:10:54.054625   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:10:54.060216   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:10:54.068878   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:10:54.073537   66218 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:10:54.355285   66218 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:10:54.800956   66218 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:10:55.352995   66218 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:10:55.353026   66218 kubeadm.go:309] 
	I0429 20:10:55.353135   66218 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:10:55.353158   66218 kubeadm.go:309] 
	I0429 20:10:55.353245   66218 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:10:55.353254   66218 kubeadm.go:309] 
	I0429 20:10:55.353290   66218 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:10:55.353382   66218 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:10:55.353456   66218 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:10:55.353467   66218 kubeadm.go:309] 
	I0429 20:10:55.353564   66218 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:10:55.353578   66218 kubeadm.go:309] 
	I0429 20:10:55.353637   66218 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:10:55.353648   66218 kubeadm.go:309] 
	I0429 20:10:55.353735   66218 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:10:55.353937   66218 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:10:55.354052   66218 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:10:55.354095   66218 kubeadm.go:309] 
	I0429 20:10:55.354216   66218 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:10:55.354334   66218 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:10:55.354348   66218 kubeadm.go:309] 
	I0429 20:10:55.354464   66218 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 793n05.pmfi0tdyn7q4x0lt \
	I0429 20:10:55.354615   66218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 20:10:55.354643   66218 kubeadm.go:309] 	--control-plane 
	I0429 20:10:55.354667   66218 kubeadm.go:309] 
	I0429 20:10:55.354799   66218 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:10:55.354810   66218 kubeadm.go:309] 
	I0429 20:10:55.354943   66218 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 793n05.pmfi0tdyn7q4x0lt \
	I0429 20:10:55.355111   66218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
	I0429 20:10:55.355493   66218 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
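
The join commands printed by kubeadm above pair a bootstrap token with the SHA-256 hash of the cluster CA public key. If a fresh join command is needed later, kubeadm can regenerate one, or the hash can be recomputed from the CA certificate using the kubeadm-documented openssl pipeline (assuming the default /etc/kubernetes/pki/ca.crt path):

# print a complete, current join command (creates a new bootstrap token)
kubeadm token create --print-join-command
# recompute the discovery-token-ca-cert-hash by hand
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
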
	I0429 20:10:55.355513   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:10:55.355520   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:10:55.357341   66218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:10:52.999575   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:10:53.005598   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 200:
	ok
	I0429 20:10:53.006923   66875 api_server.go:141] control plane version: v1.30.0
	I0429 20:10:53.006951   66875 api_server.go:131] duration metric: took 4.196158371s to wait for apiserver health ...
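
The health check above is a plain HTTPS GET against the apiserver's /healthz endpoint on this profile's non-default port 8444. It can be reproduced from the host with curl (address and port taken from the log; -k skips CA verification for brevity):

curl -k https://192.168.61.106:8444/healthz
# prints "ok" once the apiserver is serving
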
	I0429 20:10:53.006978   66875 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:10:53.007011   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:53.007073   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:53.064156   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:53.064186   66875 cri.go:89] found id: ""
	I0429 20:10:53.064196   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:53.064256   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.069282   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:53.069361   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:53.128981   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:53.129016   66875 cri.go:89] found id: ""
	I0429 20:10:53.129025   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:53.129086   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.134680   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:53.134779   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:53.188828   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:53.188857   66875 cri.go:89] found id: ""
	I0429 20:10:53.188869   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:53.188922   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.195332   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:53.195401   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:53.245528   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:53.245548   66875 cri.go:89] found id: ""
	I0429 20:10:53.245556   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:53.245617   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.251849   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:53.251925   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:53.302914   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:53.302941   66875 cri.go:89] found id: ""
	I0429 20:10:53.302950   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:53.303004   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.308072   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:53.308138   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:53.358655   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:53.358684   66875 cri.go:89] found id: ""
	I0429 20:10:53.358693   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:53.358753   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.363796   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:53.363875   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:53.413543   66875 cri.go:89] found id: ""
	I0429 20:10:53.413573   66875 logs.go:276] 0 containers: []
	W0429 20:10:53.413586   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:53.413593   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:53.413651   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:53.457365   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:53.457393   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:53.457399   66875 cri.go:89] found id: ""
	I0429 20:10:53.457409   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:53.457473   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.464321   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.469358   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:53.469377   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:53.605546   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:53.605594   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:53.682788   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:53.682837   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:53.725985   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:53.726017   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:53.775864   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:53.775890   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:53.834762   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:53.834801   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:53.853796   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:53.853830   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:53.915651   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:53.915680   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:53.968857   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:53.968885   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:54.024061   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:54.024090   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:54.079637   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:54.079674   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:54.129296   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:54.129325   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:54.499803   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:54.499861   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:57.070245   66875 system_pods.go:59] 8 kube-system pods found
	I0429 20:10:57.070288   66875 system_pods.go:61] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running
	I0429 20:10:57.070296   66875 system_pods.go:61] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running
	I0429 20:10:57.070302   66875 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running
	I0429 20:10:57.070308   66875 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running
	I0429 20:10:57.070313   66875 system_pods.go:61] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running
	I0429 20:10:57.070318   66875 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running
	I0429 20:10:57.070329   66875 system_pods.go:61] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:10:57.070339   66875 system_pods.go:61] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running
	I0429 20:10:57.070353   66875 system_pods.go:74] duration metric: took 4.063366088s to wait for pod list to return data ...
	I0429 20:10:57.070366   66875 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:10:57.077008   66875 default_sa.go:45] found service account: "default"
	I0429 20:10:57.077031   66875 default_sa.go:55] duration metric: took 6.655489ms for default service account to be created ...
	I0429 20:10:57.077040   66875 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:10:57.087665   66875 system_pods.go:86] 8 kube-system pods found
	I0429 20:10:57.087695   66875 system_pods.go:89] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running
	I0429 20:10:57.087701   66875 system_pods.go:89] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running
	I0429 20:10:57.087707   66875 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running
	I0429 20:10:57.087711   66875 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running
	I0429 20:10:57.087715   66875 system_pods.go:89] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running
	I0429 20:10:57.087719   66875 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running
	I0429 20:10:57.087726   66875 system_pods.go:89] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:10:57.087730   66875 system_pods.go:89] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running
	I0429 20:10:57.087740   66875 system_pods.go:126] duration metric: took 10.694398ms to wait for k8s-apps to be running ...
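
The pod inventory above is read straight from the Kubernetes API; the same view is available from a workstation holding the profile's kubeconfig, where the Pending metrics-server pod stands out (context name taken from the log):

kubectl --context default-k8s-diff-port-866143 -n kube-system get pods -o wide
# metrics-server-569cc877fc-g6gw2 stays Pending until its container reports Ready
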
	I0429 20:10:57.087749   66875 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:10:57.087794   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:57.106878   66875 system_svc.go:56] duration metric: took 19.118595ms WaitForService to wait for kubelet
	I0429 20:10:57.106917   66875 kubeadm.go:576] duration metric: took 4m22.695498557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:10:57.106945   66875 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:10:57.111052   66875 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:10:57.111082   66875 node_conditions.go:123] node cpu capacity is 2
	I0429 20:10:57.111096   66875 node_conditions.go:105] duration metric: took 4.144283ms to run NodePressure ...
	I0429 20:10:57.111112   66875 start.go:240] waiting for startup goroutines ...
	I0429 20:10:57.111122   66875 start.go:245] waiting for cluster config update ...
	I0429 20:10:57.111141   66875 start.go:254] writing updated cluster config ...
	I0429 20:10:57.111536   66875 ssh_runner.go:195] Run: rm -f paused
	I0429 20:10:57.169536   66875 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:10:57.172347   66875 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-866143" cluster and "default" namespace by default
	I0429 20:10:55.358683   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:10:55.371397   66218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
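
minikube stages a small bridge CNI configuration at /etc/cni/net.d/1-k8s.conflist; the log records only its size (496 bytes), not its contents. For illustration only, a representative bridge plus host-local IPAM conflist looks like the sketch below (values are illustrative, not the exact file minikube writes):

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}
EOF
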
	I0429 20:10:55.397119   66218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:10:55.397192   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:55.397192   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-456788 minikube.k8s.io/updated_at=2024_04_29T20_10_55_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=no-preload-456788 minikube.k8s.io/primary=true
	I0429 20:10:55.605222   66218 ops.go:34] apiserver oom_adj: -16
	I0429 20:10:55.605588   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:56.106450   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:56.605894   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:57.105657   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:57.605823   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:54.258101   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:56.258336   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:58.106263   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:58.605675   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:59.106483   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:59.605671   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:00.105670   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:00.605695   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:01.106482   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:01.606206   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:02.106534   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:02.606372   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:58.756416   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:00.756875   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:02.756955   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:03.106555   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:03.606298   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.106227   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.606531   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:05.105708   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:05.605735   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:06.106556   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:06.606380   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:07.105690   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:07.605718   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.749964   65980 pod_ready.go:81] duration metric: took 4m0.000195525s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" ...
	E0429 20:11:04.749999   65980 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 20:11:04.750024   65980 pod_ready.go:38] duration metric: took 4m6.211964949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:04.750053   65980 kubeadm.go:591] duration metric: took 4m17.268163648s to restartPrimaryControlPlane
	W0429 20:11:04.750123   65980 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:11:04.750156   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
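
Because the existing control plane could not be restarted in place (the metrics-server pod never became Ready within 4m0s), minikube falls back to wiping kubeadm state before re-initialising. The reset command is the one shown in the log; --force skips the confirmation prompt and --cri-socket points kubeadm at CRI-O:

sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" \
  kubeadm reset --cri-socket /var/run/crio/crio.sock --force
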
	I0429 20:11:08.106383   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:08.606498   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:08.726533   66218 kubeadm.go:1107] duration metric: took 13.329402445s to wait for elevateKubeSystemPrivileges
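
The long run of "get sa default" calls above is minikube polling for the default ServiceAccount before granting kube-system elevated privileges (elevateKubeSystemPrivileges); here the loop finished after about 13.3s. A condensed equivalent, using the binding name seen earlier in the log:

# wait for the default ServiceAccount, then bind cluster-admin to kube-system:default
until kubectl get sa default >/dev/null 2>&1; do sleep 1; done
kubectl create clusterrolebinding minikube-rbac \
  --clusterrole=cluster-admin --serviceaccount=kube-system:default
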
	W0429 20:11:08.726584   66218 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:11:08.726596   66218 kubeadm.go:393] duration metric: took 5m14.838913251s to StartCluster
	I0429 20:11:08.726617   66218 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:08.726706   66218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:11:08.729364   66218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:08.730202   66218 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:11:08.731600   66218 out.go:177] * Verifying Kubernetes components...
	I0429 20:11:08.730245   66218 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:11:08.730446   66218 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:11:08.733479   66218 addons.go:69] Setting storage-provisioner=true in profile "no-preload-456788"
	I0429 20:11:08.733509   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:11:08.733518   66218 addons.go:69] Setting default-storageclass=true in profile "no-preload-456788"
	I0429 20:11:08.733540   66218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-456788"
	I0429 20:11:08.733514   66218 addons.go:234] Setting addon storage-provisioner=true in "no-preload-456788"
	W0429 20:11:08.733641   66218 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:11:08.733674   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.733963   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.733988   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.734081   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.734079   66218 addons.go:69] Setting metrics-server=true in profile "no-preload-456788"
	I0429 20:11:08.734106   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.734117   66218 addons.go:234] Setting addon metrics-server=true in "no-preload-456788"
	W0429 20:11:08.734126   66218 addons.go:243] addon metrics-server should already be in state true
	I0429 20:11:08.734154   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.734503   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.734536   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.754451   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I0429 20:11:08.754650   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0429 20:11:08.754827   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
	I0429 20:11:08.755114   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755237   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755332   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755884   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.755905   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756031   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.756048   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.756050   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756062   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756456   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756477   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756513   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756853   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.757231   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.757254   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.757256   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.757291   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.761534   66218 addons.go:234] Setting addon default-storageclass=true in "no-preload-456788"
	W0429 20:11:08.761551   66218 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:11:08.761574   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.761857   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.761894   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.776659   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0429 20:11:08.776838   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0429 20:11:08.777067   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.777462   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.777643   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.777657   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.778152   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.778162   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.778170   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.778371   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.778845   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.778901   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0429 20:11:08.779220   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.779415   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.779446   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.779621   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.779634   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.780051   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.780246   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.780506   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.782432   66218 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:11:08.783809   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:11:08.783825   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:11:08.783843   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.782370   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.786004   66218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:11:08.787488   66218 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:08.787506   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:11:08.787663   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.788245   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.788290   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.788308   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.788381   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.788632   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.788834   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.788985   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:08.791587   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.791964   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.792052   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.792293   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.792477   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.792614   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.792712   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:08.798944   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I0429 20:11:08.799562   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.800224   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.800243   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.800790   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.801008   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.803220   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.803519   66218 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:08.803534   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:11:08.803552   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.806797   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.807216   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.807244   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.807540   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.807986   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.808170   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.808313   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:09.006753   66218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:11:09.038156   66218 node_ready.go:35] waiting up to 6m0s for node "no-preload-456788" to be "Ready" ...
	I0429 20:11:09.051516   66218 node_ready.go:49] node "no-preload-456788" has status "Ready":"True"
	I0429 20:11:09.051545   66218 node_ready.go:38] duration metric: took 13.34705ms for node "no-preload-456788" to be "Ready" ...
	I0429 20:11:09.051557   66218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:09.064032   66218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:09.308339   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:09.308749   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:11:09.308773   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:11:09.309961   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:09.347829   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:11:09.347860   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:11:09.466683   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:11:09.466718   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:11:09.678800   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:11:09.718867   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.718899   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.719248   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.719276   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.719273   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:09.719288   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.719296   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.719553   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.719574   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.719581   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:09.726177   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.726204   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.726527   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.726544   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.726590   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:10.570942   66218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.260944092s)
	I0429 20:11:10.571001   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.571012   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.571480   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.571504   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.571520   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.571528   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.571792   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:10.571818   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.571833   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.912211   66218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.233359134s)
	I0429 20:11:10.912282   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.912298   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.912746   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.912769   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.912779   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.912787   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.913055   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.913108   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.913132   66218 addons.go:470] Verifying addon metrics-server=true in "no-preload-456788"
	I0429 20:11:10.916694   66218 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0429 20:11:10.918273   66218 addons.go:505] duration metric: took 2.188028967s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
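
With the manifests applied, "Verifying addon metrics-server=true" amounts to checking that the metrics-server workload actually comes up; a few lines below the pod is still Pending. A hedged client-go sketch of one way to express that check (the k8s-app=metrics-server label is the upstream default and an assumption here, not something read from this log, and this is not minikube's own verification code):

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // metricsServerRunning returns true once every metrics-server pod in
    // kube-system reports phase Running.
    func metricsServerRunning(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
            LabelSelector: "k8s-app=metrics-server",
        })
        if err != nil {
            return false, err
        }
        if len(pods.Items) == 0 {
            return false, fmt.Errorf("no metrics-server pods found yet")
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil
            }
        }
        return true, nil
    }

A check like this would keep returning false for the Pending metrics-server pod seen later in this run, which is consistent with the MetricsServer test failures listed at the top of the report.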
	I0429 20:11:11.108067   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.108091   66218 pod_ready.go:81] duration metric: took 2.044032617s for pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.108103   66218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.115163   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.115196   66218 pod_ready.go:81] duration metric: took 7.084503ms for pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.115210   66218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.129264   66218 pod_ready.go:92] pod "etcd-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.129286   66218 pod_ready.go:81] duration metric: took 14.068541ms for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.129297   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.148114   66218 pod_ready.go:92] pod "kube-apiserver-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.148142   66218 pod_ready.go:81] duration metric: took 18.837962ms for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.148155   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.157985   66218 pod_ready.go:92] pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.158006   66218 pod_ready.go:81] duration metric: took 9.844321ms for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.158016   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6m95d" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.469680   66218 pod_ready.go:92] pod "kube-proxy-6m95d" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.469701   66218 pod_ready.go:81] duration metric: took 311.678646ms for pod "kube-proxy-6m95d" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.469710   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.868513   66218 pod_ready.go:92] pod "kube-scheduler-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.868539   66218 pod_ready.go:81] duration metric: took 398.821528ms for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.868550   66218 pod_ready.go:38] duration metric: took 2.816983409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
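
The "extra waiting" above polls each system-critical pod until its Ready condition turns True. A minimal client-go sketch of such a readiness poll, illustrative only and not minikube's actual pod_ready implementation:

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the named pod reports the Ready condition True,
    // or the timeout expires.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat lookup errors as "not ready yet" and keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

Returning (false, nil) rather than an error on transient failures is what lets the wait run for the full 6m0s budget shown in the log instead of aborting on the first miss.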
	I0429 20:11:11.868569   66218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:11:11.868632   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:11:11.885115   66218 api_server.go:72] duration metric: took 3.154873937s to wait for apiserver process to appear ...
	I0429 20:11:11.885146   66218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:11:11.885169   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:11:11.890715   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I0429 20:11:11.891649   66218 api_server.go:141] control plane version: v1.30.0
	I0429 20:11:11.891671   66218 api_server.go:131] duration metric: took 6.518818ms to wait for apiserver health ...
	I0429 20:11:11.891679   66218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:11:12.072142   66218 system_pods.go:59] 9 kube-system pods found
	I0429 20:11:12.072175   66218 system_pods.go:61] "coredns-7db6d8ff4d-hcfbq" [c0b53824-478e-4523-ada4-1cd7ba306c81] Running
	I0429 20:11:12.072183   66218 system_pods.go:61] "coredns-7db6d8ff4d-pvhwv" [f38ee7b3-53fe-4609-9b2b-000f55de5d5c] Running
	I0429 20:11:12.072188   66218 system_pods.go:61] "etcd-no-preload-456788" [b0629d4c-643a-485d-aa85-33fe009fff50] Running
	I0429 20:11:12.072194   66218 system_pods.go:61] "kube-apiserver-no-preload-456788" [e56edf5c-9883-4cd9-abab-09902048f584] Running
	I0429 20:11:12.072200   66218 system_pods.go:61] "kube-controller-manager-no-preload-456788" [bfaf44f0-da19-4cec-bec9-d9917cb8a571] Running
	I0429 20:11:12.072205   66218 system_pods.go:61] "kube-proxy-6m95d" [25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7] Running
	I0429 20:11:12.072209   66218 system_pods.go:61] "kube-scheduler-no-preload-456788" [de4f90f7-05d6-4755-a4c0-2c522f7fe88c] Running
	I0429 20:11:12.072217   66218 system_pods.go:61] "metrics-server-569cc877fc-sxgwr" [046d28fe-d51e-43ba-9550-d1d7e33d9d84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:11:12.072224   66218 system_pods.go:61] "storage-provisioner" [fd1c4813-8889-4f21-b21e-6007eaa163a6] Running
	I0429 20:11:12.072247   66218 system_pods.go:74] duration metric: took 180.561509ms to wait for pod list to return data ...
	I0429 20:11:12.072256   66218 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:11:12.267637   66218 default_sa.go:45] found service account: "default"
	I0429 20:11:12.267663   66218 default_sa.go:55] duration metric: took 195.398841ms for default service account to be created ...
	I0429 20:11:12.267677   66218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:11:12.471933   66218 system_pods.go:86] 9 kube-system pods found
	I0429 20:11:12.471967   66218 system_pods.go:89] "coredns-7db6d8ff4d-hcfbq" [c0b53824-478e-4523-ada4-1cd7ba306c81] Running
	I0429 20:11:12.471975   66218 system_pods.go:89] "coredns-7db6d8ff4d-pvhwv" [f38ee7b3-53fe-4609-9b2b-000f55de5d5c] Running
	I0429 20:11:12.471981   66218 system_pods.go:89] "etcd-no-preload-456788" [b0629d4c-643a-485d-aa85-33fe009fff50] Running
	I0429 20:11:12.471987   66218 system_pods.go:89] "kube-apiserver-no-preload-456788" [e56edf5c-9883-4cd9-abab-09902048f584] Running
	I0429 20:11:12.471994   66218 system_pods.go:89] "kube-controller-manager-no-preload-456788" [bfaf44f0-da19-4cec-bec9-d9917cb8a571] Running
	I0429 20:11:12.471999   66218 system_pods.go:89] "kube-proxy-6m95d" [25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7] Running
	I0429 20:11:12.472008   66218 system_pods.go:89] "kube-scheduler-no-preload-456788" [de4f90f7-05d6-4755-a4c0-2c522f7fe88c] Running
	I0429 20:11:12.472020   66218 system_pods.go:89] "metrics-server-569cc877fc-sxgwr" [046d28fe-d51e-43ba-9550-d1d7e33d9d84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:11:12.472027   66218 system_pods.go:89] "storage-provisioner" [fd1c4813-8889-4f21-b21e-6007eaa163a6] Running
	I0429 20:11:12.472039   66218 system_pods.go:126] duration metric: took 204.355515ms to wait for k8s-apps to be running ...
	I0429 20:11:12.472052   66218 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:11:12.472110   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:11:12.487748   66218 system_svc.go:56] duration metric: took 15.68796ms WaitForService to wait for kubelet
	I0429 20:11:12.487779   66218 kubeadm.go:576] duration metric: took 3.757538662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:11:12.487804   66218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:11:12.668597   66218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:11:12.668619   66218 node_conditions.go:123] node cpu capacity is 2
	I0429 20:11:12.668629   66218 node_conditions.go:105] duration metric: took 180.819727ms to run NodePressure ...
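
The NodePressure step above reads each node's capacity (17734596Ki ephemeral storage and 2 CPUs on this VM). A small client-go sketch that surfaces the same two figures, for illustration only:

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists each node's CPU and ephemeral-storage capacity,
    // the two values the log reports above.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
        return nil
    }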
	I0429 20:11:12.668640   66218 start.go:240] waiting for startup goroutines ...
	I0429 20:11:12.668646   66218 start.go:245] waiting for cluster config update ...
	I0429 20:11:12.668656   66218 start.go:254] writing updated cluster config ...
	I0429 20:11:12.668905   66218 ssh_runner.go:195] Run: rm -f paused
	I0429 20:11:12.718997   66218 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:11:12.720757   66218 out.go:177] * Done! kubectl is now configured to use "no-preload-456788" cluster and "default" namespace by default
	I0429 20:11:37.819019   65980 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.068841912s)
	I0429 20:11:37.819092   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:11:37.836850   65980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:11:37.849684   65980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:11:37.861597   65980 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:11:37.861626   65980 kubeadm.go:156] found existing configuration files:
	
	I0429 20:11:37.861674   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:11:37.872799   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:11:37.872860   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:11:37.884336   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:11:37.895124   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:11:37.895181   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:11:37.906874   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:11:37.917482   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:11:37.917530   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:11:37.928137   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:11:37.938698   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:11:37.938750   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
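
The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the check fails (here the files simply do not exist yet after the kubeadm reset, so every grep exits with status 2). A local-filesystem sketch of that cleanup logic, assuming direct file access rather than the SSH runner the log actually uses:

    package sketch

    import (
        "bytes"
        "errors"
        "os"
    )

    // removeStaleKubeconfig deletes path unless it already points at the
    // expected control-plane endpoint. Missing files are treated as already
    // clean, mirroring the "No such file or directory" cases in the log.
    func removeStaleKubeconfig(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if errors.Is(err, os.ErrNotExist) {
            return nil // nothing to clean up
        }
        if err != nil {
            return err
        }
        if bytes.Contains(data, []byte(endpoint)) {
            return nil // config already targets the right endpoint; keep it
        }
        return os.Remove(path)
    }

For this run the endpoint being grepped for is https://control-plane.minikube.internal:8443, as shown in the commands above.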
	I0429 20:11:37.949658   65980 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:11:38.159358   65980 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:11:46.848042   65980 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:11:46.848108   65980 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:11:46.848169   65980 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:11:46.848308   65980 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:11:46.848447   65980 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:11:46.848531   65980 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:11:46.850368   65980 out.go:204]   - Generating certificates and keys ...
	I0429 20:11:46.850444   65980 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:11:46.850496   65980 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:11:46.850580   65980 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:11:46.850649   65980 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:11:46.850742   65980 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:11:46.850850   65980 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:11:46.850949   65980 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:11:46.851018   65980 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:11:46.851117   65980 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:11:46.851201   65980 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:11:46.851263   65980 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:11:46.851327   65980 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:11:46.851395   65980 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:11:46.851466   65980 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:11:46.851513   65980 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:11:46.851605   65980 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:11:46.851690   65980 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:11:46.851791   65980 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:11:46.851878   65980 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:11:46.853420   65980 out.go:204]   - Booting up control plane ...
	I0429 20:11:46.853526   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:11:46.853617   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:11:46.853696   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:11:46.853791   65980 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:11:46.853866   65980 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:11:46.853900   65980 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:11:46.854010   65980 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:11:46.854094   65980 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:11:46.854148   65980 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.976221ms
	I0429 20:11:46.854240   65980 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:11:46.854311   65980 kubeadm.go:309] [api-check] The API server is healthy after 5.50298765s
	I0429 20:11:46.854407   65980 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:11:46.854509   65980 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:11:46.854565   65980 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:11:46.854726   65980 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-161370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:11:46.854783   65980 kubeadm.go:309] [bootstrap-token] Using token: 93xwhj.zowa67wvl54p1iru
	I0429 20:11:46.856308   65980 out.go:204]   - Configuring RBAC rules ...
	I0429 20:11:46.856452   65980 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:11:46.856561   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:11:46.856736   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:11:46.856867   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:11:46.857018   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:11:46.857140   65980 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:11:46.857294   65980 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:11:46.857358   65980 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:11:46.857419   65980 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:11:46.857428   65980 kubeadm.go:309] 
	I0429 20:11:46.857502   65980 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:11:46.857514   65980 kubeadm.go:309] 
	I0429 20:11:46.857606   65980 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:11:46.857617   65980 kubeadm.go:309] 
	I0429 20:11:46.857649   65980 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:11:46.857725   65980 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:11:46.857797   65980 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:11:46.857806   65980 kubeadm.go:309] 
	I0429 20:11:46.857880   65980 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:11:46.857889   65980 kubeadm.go:309] 
	I0429 20:11:46.857947   65980 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:11:46.857955   65980 kubeadm.go:309] 
	I0429 20:11:46.858020   65980 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:11:46.858125   65980 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:11:46.858216   65980 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:11:46.858224   65980 kubeadm.go:309] 
	I0429 20:11:46.858325   65980 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:11:46.858433   65980 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:11:46.858442   65980 kubeadm.go:309] 
	I0429 20:11:46.858553   65980 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 93xwhj.zowa67wvl54p1iru \
	I0429 20:11:46.858696   65980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 20:11:46.858722   65980 kubeadm.go:309] 	--control-plane 
	I0429 20:11:46.858728   65980 kubeadm.go:309] 
	I0429 20:11:46.858797   65980 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:11:46.858803   65980 kubeadm.go:309] 
	I0429 20:11:46.858881   65980 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 93xwhj.zowa67wvl54p1iru \
	I0429 20:11:46.859014   65980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
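
The --discovery-token-ca-cert-hash printed in the join commands above is kubeadm's standard public-key pin: a SHA-256 digest over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A hedged Go sketch that recomputes it (the certificate path is inferred from the "[certs] Using certificateDir folder" line and may differ on other setups):

    package sketch

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash recomputes the kubeadm discovery-token-ca-cert-hash for a CA
    // certificate: sha256 over the DER-encoded SubjectPublicKeyInfo of its
    // public key, prefixed with "sha256:".
    func caCertHash(caPath string) (string, error) {
        pemBytes, err := os.ReadFile(caPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return "", fmt.Errorf("no PEM block found in %s", caPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(spki)
        return "sha256:" + hex.EncodeToString(sum[:]), nil
    }

Run against /var/lib/minikube/certs/ca.crt on the VM, a function like this should reproduce the sha256:02121775... value in the join command.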
	I0429 20:11:46.859025   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:11:46.859034   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:11:46.861619   65980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:11:46.863111   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:11:46.875965   65980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
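
The log shows a 496-byte conflist being copied to /etc/cni/net.d/1-k8s.conflist but not its contents. Sketched below is a generic bridge-plus-portmap chain with host-local IPAM, written from Go, purely for illustration; the subnet and plugin options are assumptions, not the exact file minikube ships:

    package sketch

    import "os"

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    // writeBridgeConflist installs the config where the container runtime's
    // CNI plugin discovery expects to find it.
    func writeBridgeConflist() error {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            return err
        }
        return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
    }

A single numbered conflist in /etc/cni/net.d is enough for crio to pick up the bridge network, which is why the run proceeds straight to labelling the node and waiting for pods once the file is in place.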
	I0429 20:11:46.897147   65980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:11:46.897225   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:46.897238   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-161370 minikube.k8s.io/updated_at=2024_04_29T20_11_46_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=embed-certs-161370 minikube.k8s.io/primary=true
	I0429 20:11:46.927555   65980 ops.go:34] apiserver oom_adj: -16
	I0429 20:11:47.119594   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:47.620640   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:48.119974   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:48.620618   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:49.120107   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:49.620349   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:50.120180   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:50.620533   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:51.120332   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:51.620669   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:52.119922   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:52.620467   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:53.120486   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:53.620314   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:54.120159   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:54.620430   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:55.119995   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:55.620496   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:56.120152   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:56.620390   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:57.120090   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:57.619671   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:58.120549   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:58.620334   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.120532   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.619732   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.765502   65980 kubeadm.go:1107] duration metric: took 12.868344365s to wait for elevateKubeSystemPrivileges
	W0429 20:11:59.765550   65980 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:11:59.765561   65980 kubeadm.go:393] duration metric: took 5m12.339650014s to StartCluster
	I0429 20:11:59.765582   65980 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:59.765671   65980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:11:59.767924   65980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:59.768253   65980 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:11:59.769950   65980 out.go:177] * Verifying Kubernetes components...
	I0429 20:11:59.768323   65980 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:11:59.768433   65980 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:11:59.771281   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:11:59.771300   65980 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-161370"
	I0429 20:11:59.771313   65980 addons.go:69] Setting default-storageclass=true in profile "embed-certs-161370"
	I0429 20:11:59.771332   65980 addons.go:69] Setting metrics-server=true in profile "embed-certs-161370"
	I0429 20:11:59.771344   65980 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-161370"
	W0429 20:11:59.771355   65980 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:11:59.771361   65980 addons.go:234] Setting addon metrics-server=true in "embed-certs-161370"
	W0429 20:11:59.771370   65980 addons.go:243] addon metrics-server should already be in state true
	I0429 20:11:59.771399   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.771401   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.771354   65980 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-161370"
	I0429 20:11:59.771757   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771768   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771772   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771783   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.771786   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.771788   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.787359   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0429 20:11:59.787384   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0429 20:11:59.787503   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46153
	I0429 20:11:59.787764   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.787987   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.788069   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.788254   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788273   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.788708   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788724   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.788773   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.788832   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788852   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.789102   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.789117   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.789267   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.789478   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.789510   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.790170   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.790220   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.792108   65980 addons.go:234] Setting addon default-storageclass=true in "embed-certs-161370"
	W0429 20:11:59.792127   65980 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:11:59.792154   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.792386   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.792424   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.808581   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35659
	I0429 20:11:59.808924   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44943
	I0429 20:11:59.808943   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.809461   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.809481   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.809561   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.809791   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.810335   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.810357   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.810976   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.810992   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.811324   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.811604   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0429 20:11:59.811758   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.812141   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.812592   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.812610   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.813130   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.813351   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.813614   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.815589   65980 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:11:59.817004   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:11:59.817014   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:11:59.817027   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.815020   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.818585   65980 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:11:59.820110   65980 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:59.820125   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:11:59.820140   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.819840   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.820305   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.820333   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.820563   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.820722   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.820874   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.820998   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.822849   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.823299   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.823323   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.823460   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.823599   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.823924   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.824039   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.827552   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0429 20:11:59.827976   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.828369   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.828389   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.828754   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.828921   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.830295   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.830566   65980 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:59.830578   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:11:59.830590   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.833174   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.833526   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.833545   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.833759   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.833910   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.834029   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.834166   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.978978   65980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:11:59.995547   65980 node_ready.go:35] waiting up to 6m0s for node "embed-certs-161370" to be "Ready" ...
	I0429 20:12:00.003802   65980 node_ready.go:49] node "embed-certs-161370" has status "Ready":"True"
	I0429 20:12:00.003823   65980 node_ready.go:38] duration metric: took 8.245639ms for node "embed-certs-161370" to be "Ready" ...
	I0429 20:12:00.003833   65980 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:12:00.010487   65980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:00.072627   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:12:00.075716   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:12:00.177043   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:12:00.177069   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:12:00.278082   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:12:00.278112   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:12:00.311731   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:12:00.311756   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:12:00.369982   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:12:00.642840   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.642865   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643084   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.643109   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643227   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.643240   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.643248   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.643256   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643374   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.645085   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645103   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.645112   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.645121   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.645196   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645228   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.645231   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.645331   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645343   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.658929   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.658955   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.659236   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.659267   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.659281   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.103183   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:01.103207   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:01.103488   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:01.103542   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.103557   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:01.103541   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:01.103584   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:01.105440   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:01.105461   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.105473   65980 addons.go:470] Verifying addon metrics-server=true in "embed-certs-161370"
	I0429 20:12:01.107435   65980 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0429 20:12:01.109051   65980 addons.go:505] duration metric: took 1.340729876s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0429 20:12:02.029772   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace has status "Ready":"False"
	I0429 20:12:02.520396   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.520417   65980 pod_ready.go:81] duration metric: took 2.509903724s for pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.520426   65980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.529115   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.529141   65980 pod_ready.go:81] duration metric: took 8.707165ms for pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.529153   65980 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.539459   65980 pod_ready.go:92] pod "etcd-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.539478   65980 pod_ready.go:81] duration metric: took 10.318294ms for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.539489   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.543813   65980 pod_ready.go:92] pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.543830   65980 pod_ready.go:81] duration metric: took 4.333619ms for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.543839   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.549343   65980 pod_ready.go:92] pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.549363   65980 pod_ready.go:81] duration metric: took 5.516323ms for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.549374   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wq48j" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.915209   65980 pod_ready.go:92] pod "kube-proxy-wq48j" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.915232   65980 pod_ready.go:81] duration metric: took 365.851814ms for pod "kube-proxy-wq48j" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.915240   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:03.315564   65980 pod_ready.go:92] pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:03.315587   65980 pod_ready.go:81] duration metric: took 400.340876ms for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:03.315595   65980 pod_ready.go:38] duration metric: took 3.311752591s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:12:03.315609   65980 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:12:03.315655   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:12:03.333491   65980 api_server.go:72] duration metric: took 3.565200855s to wait for apiserver process to appear ...
	I0429 20:12:03.333521   65980 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:12:03.333538   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:12:03.338822   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0429 20:12:03.339975   65980 api_server.go:141] control plane version: v1.30.0
	I0429 20:12:03.339995   65980 api_server.go:131] duration metric: took 6.468233ms to wait for apiserver health ...
	I0429 20:12:03.340002   65980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:12:03.519016   65980 system_pods.go:59] 9 kube-system pods found
	I0429 20:12:03.519042   65980 system_pods.go:61] "coredns-7db6d8ff4d-7z6zv" [422451a2-615d-4bf8-8de8-d5fa5805219f] Running
	I0429 20:12:03.519047   65980 system_pods.go:61] "coredns-7db6d8ff4d-rr6bd" [6d14ff20-6dab-4c02-b91c-0a1e326f1593] Running
	I0429 20:12:03.519050   65980 system_pods.go:61] "etcd-embed-certs-161370" [ab19e79c-18bd-4d0d-b5cf-639453495383] Running
	I0429 20:12:03.519055   65980 system_pods.go:61] "kube-apiserver-embed-certs-161370" [6091dd0a-333d-4729-97db-eb7a30755db4] Running
	I0429 20:12:03.519059   65980 system_pods.go:61] "kube-controller-manager-embed-certs-161370" [de70d57c-9329-4d37-a838-9c9ae1e41871] Running
	I0429 20:12:03.519061   65980 system_pods.go:61] "kube-proxy-wq48j" [3b3b23ef-b5b4-4754-bc44-73e1d51a18d7] Running
	I0429 20:12:03.519065   65980 system_pods.go:61] "kube-scheduler-embed-certs-161370" [c7fd3d36-4e35-43b2-93e7-45129464937d] Running
	I0429 20:12:03.519071   65980 system_pods.go:61] "metrics-server-569cc877fc-x2wb6" [cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:12:03.519075   65980 system_pods.go:61] "storage-provisioner" [93e046a1-3867-44e1-8a4f-cf0eba6dfd6b] Running
	I0429 20:12:03.519082   65980 system_pods.go:74] duration metric: took 179.075384ms to wait for pod list to return data ...
	I0429 20:12:03.519089   65980 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:12:03.714354   65980 default_sa.go:45] found service account: "default"
	I0429 20:12:03.714384   65980 default_sa.go:55] duration metric: took 195.287433ms for default service account to be created ...
	I0429 20:12:03.714395   65980 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:12:03.918729   65980 system_pods.go:86] 9 kube-system pods found
	I0429 20:12:03.918755   65980 system_pods.go:89] "coredns-7db6d8ff4d-7z6zv" [422451a2-615d-4bf8-8de8-d5fa5805219f] Running
	I0429 20:12:03.918760   65980 system_pods.go:89] "coredns-7db6d8ff4d-rr6bd" [6d14ff20-6dab-4c02-b91c-0a1e326f1593] Running
	I0429 20:12:03.918765   65980 system_pods.go:89] "etcd-embed-certs-161370" [ab19e79c-18bd-4d0d-b5cf-639453495383] Running
	I0429 20:12:03.918769   65980 system_pods.go:89] "kube-apiserver-embed-certs-161370" [6091dd0a-333d-4729-97db-eb7a30755db4] Running
	I0429 20:12:03.918773   65980 system_pods.go:89] "kube-controller-manager-embed-certs-161370" [de70d57c-9329-4d37-a838-9c9ae1e41871] Running
	I0429 20:12:03.918777   65980 system_pods.go:89] "kube-proxy-wq48j" [3b3b23ef-b5b4-4754-bc44-73e1d51a18d7] Running
	I0429 20:12:03.918780   65980 system_pods.go:89] "kube-scheduler-embed-certs-161370" [c7fd3d36-4e35-43b2-93e7-45129464937d] Running
	I0429 20:12:03.918787   65980 system_pods.go:89] "metrics-server-569cc877fc-x2wb6" [cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:12:03.918791   65980 system_pods.go:89] "storage-provisioner" [93e046a1-3867-44e1-8a4f-cf0eba6dfd6b] Running
	I0429 20:12:03.918800   65980 system_pods.go:126] duration metric: took 204.399385ms to wait for k8s-apps to be running ...
	I0429 20:12:03.918809   65980 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:12:03.918851   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:12:03.937870   65980 system_svc.go:56] duration metric: took 19.05503ms WaitForService to wait for kubelet
	I0429 20:12:03.937892   65980 kubeadm.go:576] duration metric: took 4.169607456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:12:03.937910   65980 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:12:04.116479   65980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:12:04.116504   65980 node_conditions.go:123] node cpu capacity is 2
	I0429 20:12:04.116513   65980 node_conditions.go:105] duration metric: took 178.599246ms to run NodePressure ...
	I0429 20:12:04.116524   65980 start.go:240] waiting for startup goroutines ...
	I0429 20:12:04.116530   65980 start.go:245] waiting for cluster config update ...
	I0429 20:12:04.116540   65980 start.go:254] writing updated cluster config ...
	I0429 20:12:04.116799   65980 ssh_runner.go:195] Run: rm -f paused
	I0429 20:12:04.167803   65980 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:12:04.169861   65980 out.go:177] * Done! kubectl is now configured to use "embed-certs-161370" cluster and "default" namespace by default
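	(Editor's note: the readiness checks logged above can be repeated by hand against the freshly started cluster. A minimal sketch, assuming the kubectl context carries the profile name embed-certs-161370 and that the metrics-server addon keeps its default Deployment name "metrics-server" — both assumptions, not taken from this log:

		kubectl --context embed-certs-161370 -n kube-system get pods -l k8s-app=kube-dns
		kubectl --context embed-certs-161370 -n kube-system rollout status deployment/metrics-server --timeout=120s
		kubectl --context embed-certs-161370 get --raw /healthz

	The last command queries the same apiserver /healthz endpoint polled above and should return "ok" when the control plane is healthy.)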
	I0429 20:12:09.853929   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:12:09.854036   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:12:09.856141   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:12:09.856215   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:12:09.856314   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:12:09.856435   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:12:09.856529   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:12:09.856638   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:12:09.858658   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:12:09.858759   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:12:09.858821   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:12:09.858914   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:12:09.858967   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:12:09.859049   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:12:09.859118   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:12:09.859197   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:12:09.859311   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:12:09.859435   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:12:09.859548   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:12:09.859605   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:12:09.859678   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:12:09.859766   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:12:09.859856   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:12:09.859947   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:12:09.860025   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:12:09.860149   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:12:09.860228   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:12:09.860289   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:12:09.860390   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:12:09.862098   66615 out.go:204]   - Booting up control plane ...
	I0429 20:12:09.862211   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:12:09.862298   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:12:09.862360   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:12:09.862484   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:12:09.862720   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:12:09.862794   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:12:09.862882   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863117   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863244   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863470   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863544   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863814   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863895   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864144   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864223   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864393   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864408   66615 kubeadm.go:309] 
	I0429 20:12:09.864473   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:12:09.864526   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:12:09.864543   66615 kubeadm.go:309] 
	I0429 20:12:09.864589   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:12:09.864638   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:12:09.864779   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:12:09.864789   66615 kubeadm.go:309] 
	I0429 20:12:09.864911   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:12:09.864971   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:12:09.865026   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:12:09.865033   66615 kubeadm.go:309] 
	I0429 20:12:09.865150   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:12:09.865228   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:12:09.865241   66615 kubeadm.go:309] 
	I0429 20:12:09.865404   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:12:09.865538   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:12:09.865651   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:12:09.865755   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:12:09.865828   66615 kubeadm.go:309] 
	W0429 20:12:09.865940   66615 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0429 20:12:09.866027   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:12:10.987703   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.121642991s)
	I0429 20:12:10.987802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:12:11.007295   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:12:11.020772   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:12:11.020790   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:12:11.020838   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:12:11.033334   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:12:11.033405   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:12:11.044565   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:12:11.057087   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:12:11.057143   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:12:11.069908   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.082866   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:12:11.082920   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.096659   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:12:11.110106   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:12:11.110166   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:12:11.124952   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:12:11.396252   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
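	(Editor's note: the preflight warning above reports that the kubelet service is not enabled on the node. A hedged sketch of the corresponding manual check, using standard systemd commands that are not part of this log:

		sudo systemctl enable --now kubelet
		sudo systemctl status kubelet --no-pager
		sudo journalctl -xeu kubelet | tail -n 50

	If the kubelet is enabled and running but the 10248/healthz probes below still fail, the journal output is usually where the misconfiguration shows up.)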
	I0429 20:14:07.831448   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:14:07.831556   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:14:07.833111   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:14:07.833179   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:14:07.833288   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:14:07.833421   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:14:07.833530   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:14:07.833616   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:14:07.835518   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:14:07.835623   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:14:07.835703   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:14:07.835776   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:14:07.835839   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:14:07.835893   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:14:07.835957   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:14:07.836039   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:14:07.836129   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:14:07.836238   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:14:07.836350   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:14:07.836394   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:14:07.836441   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:14:07.836488   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:14:07.836559   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:14:07.836637   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:14:07.836683   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:14:07.836778   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:14:07.836854   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:14:07.836895   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:14:07.836950   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:14:07.838553   66615 out.go:204]   - Booting up control plane ...
	I0429 20:14:07.838635   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:14:07.838718   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:14:07.838836   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:14:07.838918   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:14:07.839069   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:14:07.839126   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:14:07.839180   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839369   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839450   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839654   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839779   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840008   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840076   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840322   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840380   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840571   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840594   66615 kubeadm.go:309] 
	I0429 20:14:07.840637   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:14:07.840673   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:14:07.840682   66615 kubeadm.go:309] 
	I0429 20:14:07.840715   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:14:07.840745   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:14:07.840844   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:14:07.840857   66615 kubeadm.go:309] 
	I0429 20:14:07.840969   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:14:07.841022   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:14:07.841073   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:14:07.841083   66615 kubeadm.go:309] 
	I0429 20:14:07.841184   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:14:07.841315   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:14:07.841325   66615 kubeadm.go:309] 
	I0429 20:14:07.841454   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:14:07.841550   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:14:07.841632   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:14:07.841697   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:14:07.841760   66615 kubeadm.go:393] duration metric: took 8m1.501853767s to StartCluster
	I0429 20:14:07.841781   66615 kubeadm.go:309] 
	I0429 20:14:07.841800   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:14:07.841853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:14:07.898194   66615 cri.go:89] found id: ""
	I0429 20:14:07.898227   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.898237   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:14:07.898244   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:14:07.898316   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:14:07.938873   66615 cri.go:89] found id: ""
	I0429 20:14:07.938903   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.938914   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:14:07.938921   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:14:07.938979   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:14:07.980523   66615 cri.go:89] found id: ""
	I0429 20:14:07.980551   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.980559   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:14:07.980565   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:14:07.980612   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:14:08.021334   66615 cri.go:89] found id: ""
	I0429 20:14:08.021366   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.021377   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:14:08.021389   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:14:08.021446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:14:08.060598   66615 cri.go:89] found id: ""
	I0429 20:14:08.060636   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.060648   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:14:08.060655   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:14:08.060716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:14:08.101689   66615 cri.go:89] found id: ""
	I0429 20:14:08.101715   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.101723   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:14:08.101729   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:14:08.101786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:14:08.143295   66615 cri.go:89] found id: ""
	I0429 20:14:08.143333   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.143344   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:14:08.143351   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:14:08.143408   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:14:08.190555   66615 cri.go:89] found id: ""
	I0429 20:14:08.190585   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.190597   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:14:08.190609   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:14:08.190624   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:14:08.251830   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:14:08.251870   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:14:08.306512   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:14:08.306554   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:14:08.323258   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:14:08.323283   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:14:08.405539   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:14:08.405568   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:14:08.405583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0429 20:14:08.514288   66615 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0429 20:14:08.514344   66615 out.go:239] * 
	W0429 20:14:08.514431   66615 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.514465   66615 out.go:239] * 
	W0429 20:14:08.515399   66615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:14:08.518578   66615 out.go:177] 
	W0429 20:14:08.519725   66615 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.519782   66615 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0429 20:14:08.519816   66615 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0429 20:14:08.521068   66615 out.go:177] 
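	The commands below are a minimal troubleshooting sketch, not part of the test output, following the kubeadm hints and the minikube suggestion above; the no-preload-456788 profile name is assumed from the CRI-O and node logs that follow, so substitute the profile of the failing test.

	# Assumed profile name (from the logs below); adjust as needed.
	# Inspect kubelet state and recent logs inside the node, per the kubeadm hints above.
	minikube ssh -p no-preload-456788 "sudo systemctl status kubelet --no-pager"
	minikube ssh -p no-preload-456788 "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	# List control-plane containers via crictl, as the kubeadm output suggests.
	minikube ssh -p no-preload-456788 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the kubelet cgroup driver suggested in the warning above.
	minikube start -p no-preload-456788 --extra-config=kubelet.cgroup-driver=systemd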
	
	
	==> CRI-O <==
	Apr 29 20:20:14 no-preload-456788 crio[729]: time="2024-04-29 20:20:14.990478175Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:72ceac298eb0890d775ddb4eac2119401c8463dcd154f79f99c4532862f3f2e1,Verbose:false,}" file="otel-collector/interceptors.go:62" id=870165d7-ef21-48b6-948e-aa53717519d1 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 29 20:20:14 no-preload-456788 crio[729]: time="2024-04-29 20:20:14.990671355Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:72ceac298eb0890d775ddb4eac2119401c8463dcd154f79f99c4532862f3f2e1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1714421448815945930,StartedAt:1714421448926333324,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205865ab9386e0544ce94281b335d3fa,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/205865ab9386e0544ce94281b335d3fa/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/205865ab9386e0544ce94281b335d3fa/containers/kube-controller-manager/ed212024,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE
,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-no-preload-456788_205865ab9386e0544ce94281b335d3fa/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetM
ems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=870165d7-ef21-48b6-948e-aa53717519d1 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 29 20:20:14 no-preload-456788 crio[729]: time="2024-04-29 20:20:14.991292706Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:0f235fbb4c2c97d173f9b1dd90f7c095c5e1b4a857f16f175edd51e9df2e1f13,Verbose:false,}" file="otel-collector/interceptors.go:62" id=68e4d826-cfc3-4f43-8daf-7cf773ae3ffc name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 29 20:20:14 no-preload-456788 crio[729]: time="2024-04-29 20:20:14.991412639Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0f235fbb4c2c97d173f9b1dd90f7c095c5e1b4a857f16f175edd51e9df2e1f13,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1714421448739475338,StartedAt:1714421448860981116,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3448a18c94c0d03ef9134e75fc8da576,},Annotations:map[string]string{io.kubernetes.container.hash: 5612cf45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/3448a18c94c0d03ef9134e75fc8da576/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/3448a18c94c0d03ef9134e75fc8da576/containers/kube-apiserver/7aed5317,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Contain
erPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-no-preload-456788_3448a18c94c0d03ef9134e75fc8da576/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=68e4d826-cfc3-4f43-8daf-7cf773ae3ffc name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.005831077Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f11cd295-310b-40e5-9aa1-1b7251e19e57 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.005889090Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f11cd295-310b-40e5-9aa1-1b7251e19e57 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.008296695Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e28f1fb8-522a-4d80-b72c-8775934e761d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.008663785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422015008644225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e28f1fb8-522a-4d80-b72c-8775934e761d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.009477987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8417e77-b6af-4ea7-a570-f68384a5de4b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.009528803Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8417e77-b6af-4ea7-a570-f68384a5de4b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.009840372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d11f63276693766369907ad330504ed69597491d538cd9b5a329f53e0905107,PodSandboxId:fdd54e79fdd15614a68e32539580048e00223a09cd3114c4bf69b2737edb703d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421471129045087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd1c4813-8889-4f21-b21e-6007eaa163a6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d1a81fa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229a76fc962ea694d3ec4ef1d263c0f74884241f8f6d47bec60d8fa1273589d7,PodSandboxId:88d438c8c8de00704b3928c035bbe2d47c1ef1a06078688142c5aaadfd5a328a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421470274085527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pvhwv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38ee7b3-53fe-4609-9b2b-000f55de5d5c,},Annotations:map[string]string{io.kubernetes.container.hash: 749b4823,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5fc28aade0c3f32cd2a7a12b42b5608b169783d6272faea610cca67ee353b6,PodSandboxId:2f7a61fdbc3d8e688c5bb769ed501cdbda7575ccf57b66dc3b04c63d35cd656f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421469817009543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hcfbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0
b53824-478e-4523-ada4-1cd7ba306c81,},Annotations:map[string]string{io.kubernetes.container.hash: a08c04a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abda1b10e157741997a1ff6231a8d94bae873a8dc8ed5f4f50bcf25058f9ee0d,PodSandboxId:59f981fd24e8e92afd0fe36277fdbdeb4babf75c2e4be2bdde65e7ccd17946dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1714421469373027116,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6m95d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7,},Annotations:map[string]string{io.kubernetes.container.hash: f7b0245a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d547c066386359c26f32a9b3cdfeede872d97f68e253371e03cf4703b6fb2fa,PodSandboxId:488d6ae14da92daa58faf06f5f7bf8ce7a3a353d53ddd0b6f9fe844b52e45d85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421448799909861,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a08ac4ebc8433e053b376f035d670b,},Annotations:map[string]string{io.kubernetes.container.hash: 5d2686,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aa6b64ca6ded6d70a1edc0d5698398537da41a5a6f57ce52c6fd909454eb8ca,PodSandboxId:31c57455e70d7f5d16a47f64a012beb830434cacf5e70f328b54fc0cb61ff641,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421448737556685,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c073a5401d1f6a9264443a37232e7b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72ceac298eb0890d775ddb4eac2119401c8463dcd154f79f99c4532862f3f2e1,PodSandboxId:29122b6de9c841653ddbb98be21ac4f2be0a779ecf87f4a55f9490190caa306a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421448663523381,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205865ab9386e0544ce94281b335d3fa,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f235fbb4c2c97d173f9b1dd90f7c095c5e1b4a857f16f175edd51e9df2e1f13,PodSandboxId:6ab26349fa514b473a3ed37a595a92433d37a0e37b3976677189303140c4a97b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421448669420404,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3448a18c94c0d03ef9134e75fc8da576,},Annotations:map[string]string{io.kubernetes.container.hash: 5612cf45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8417e77-b6af-4ea7-a570-f68384a5de4b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.029551895Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=5c7a7b56-19ff-4f34-838a-2484b3b34577 name=/runtime.v1.RuntimeService/Status
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.029950572Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=5c7a7b56-19ff-4f34-838a-2484b3b34577 name=/runtime.v1.RuntimeService/Status
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.059484232Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2aaa57f5-59c7-4484-b63a-58aa17825879 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.059801443Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2aaa57f5-59c7-4484-b63a-58aa17825879 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.061354273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d4f4d73-ddaa-42d8-97af-7e97aa45d195 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.061821009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422015061782851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d4f4d73-ddaa-42d8-97af-7e97aa45d195 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.062633326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=caa3f5f6-4ef2-42d6-8737-d486d223bd56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.062737219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=caa3f5f6-4ef2-42d6-8737-d486d223bd56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.063159955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d11f63276693766369907ad330504ed69597491d538cd9b5a329f53e0905107,PodSandboxId:fdd54e79fdd15614a68e32539580048e00223a09cd3114c4bf69b2737edb703d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421471129045087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd1c4813-8889-4f21-b21e-6007eaa163a6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d1a81fa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229a76fc962ea694d3ec4ef1d263c0f74884241f8f6d47bec60d8fa1273589d7,PodSandboxId:88d438c8c8de00704b3928c035bbe2d47c1ef1a06078688142c5aaadfd5a328a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421470274085527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pvhwv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38ee7b3-53fe-4609-9b2b-000f55de5d5c,},Annotations:map[string]string{io.kubernetes.container.hash: 749b4823,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5fc28aade0c3f32cd2a7a12b42b5608b169783d6272faea610cca67ee353b6,PodSandboxId:2f7a61fdbc3d8e688c5bb769ed501cdbda7575ccf57b66dc3b04c63d35cd656f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421469817009543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hcfbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0
b53824-478e-4523-ada4-1cd7ba306c81,},Annotations:map[string]string{io.kubernetes.container.hash: a08c04a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abda1b10e157741997a1ff6231a8d94bae873a8dc8ed5f4f50bcf25058f9ee0d,PodSandboxId:59f981fd24e8e92afd0fe36277fdbdeb4babf75c2e4be2bdde65e7ccd17946dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1714421469373027116,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6m95d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7,},Annotations:map[string]string{io.kubernetes.container.hash: f7b0245a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d547c066386359c26f32a9b3cdfeede872d97f68e253371e03cf4703b6fb2fa,PodSandboxId:488d6ae14da92daa58faf06f5f7bf8ce7a3a353d53ddd0b6f9fe844b52e45d85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421448799909861,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a08ac4ebc8433e053b376f035d670b,},Annotations:map[string]string{io.kubernetes.container.hash: 5d2686,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aa6b64ca6ded6d70a1edc0d5698398537da41a5a6f57ce52c6fd909454eb8ca,PodSandboxId:31c57455e70d7f5d16a47f64a012beb830434cacf5e70f328b54fc0cb61ff641,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421448737556685,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c073a5401d1f6a9264443a37232e7b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72ceac298eb0890d775ddb4eac2119401c8463dcd154f79f99c4532862f3f2e1,PodSandboxId:29122b6de9c841653ddbb98be21ac4f2be0a779ecf87f4a55f9490190caa306a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421448663523381,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205865ab9386e0544ce94281b335d3fa,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f235fbb4c2c97d173f9b1dd90f7c095c5e1b4a857f16f175edd51e9df2e1f13,PodSandboxId:6ab26349fa514b473a3ed37a595a92433d37a0e37b3976677189303140c4a97b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421448669420404,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3448a18c94c0d03ef9134e75fc8da576,},Annotations:map[string]string{io.kubernetes.container.hash: 5612cf45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=caa3f5f6-4ef2-42d6-8737-d486d223bd56 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.087631628Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=d3cabecd-6025-490d-9e69-8dedffdbc38d name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.088033154Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:88a287948753e98ec3484d19c067cc87e8d2bdfff906202e65cab3161ac6c0dc,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-sxgwr,Uid:046d28fe-d51e-43ba-9550-d1d7e33d9d84,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421471066016097,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-sxgwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 046d28fe-d51e-43ba-9550-d1d7e33d9d84,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T20:11:10.749639074Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fdd54e79fdd15614a68e32539580048e00223a09cd3114c4bf69b2737edb703d,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:fd1c4813-8889-4f21-b21e-6007eaa163a6,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421470870264993,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd1c4813-8889-4f21-b21e-6007eaa163a6,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-29T20:11:10.557108596Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:59f981fd24e8e92afd0fe36277fdbdeb4babf75c2e4be2bdde65e7ccd17946dd,Metadata:&PodSandboxMetadata{Name:kube-proxy-6m95d,Uid:25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421469133762556,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6m95d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T20:11:08.505631801Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88d438c8c8de00704b3928c035bbe2d47c1ef1a06078688142c5aaadfd5a328a,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-pvhwv,Uid
:f38ee7b3-53fe-4609-9b2b-000f55de5d5c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421469121980820,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-pvhwv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38ee7b3-53fe-4609-9b2b-000f55de5d5c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T20:11:08.787265936Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f7a61fdbc3d8e688c5bb769ed501cdbda7575ccf57b66dc3b04c63d35cd656f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-hcfbq,Uid:c0b53824-478e-4523-ada4-1cd7ba306c81,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421469035043347,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-hcfbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0b53824-478e-4523-ada4-1cd7ba306c81,k8s-app: kube-dns,pod-templat
e-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T20:11:08.711980174Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:31c57455e70d7f5d16a47f64a012beb830434cacf5e70f328b54fc0cb61ff641,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-456788,Uid:62c073a5401d1f6a9264443a37232e7b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421448457966360,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c073a5401d1f6a9264443a37232e7b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 62c073a5401d1f6a9264443a37232e7b,kubernetes.io/config.seen: 2024-04-29T20:10:47.986005879Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:488d6ae14da92daa58faf06f5f7bf8ce7a3a353d53ddd0b6f9fe844b52e45d85,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-4
56788,Uid:14a08ac4ebc8433e053b376f035d670b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421448456774663,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a08ac4ebc8433e053b376f035d670b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.235:2379,kubernetes.io/config.hash: 14a08ac4ebc8433e053b376f035d670b,kubernetes.io/config.seen: 2024-04-29T20:10:47.985999109Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:29122b6de9c841653ddbb98be21ac4f2be0a779ecf87f4a55f9490190caa306a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-456788,Uid:205865ab9386e0544ce94281b335d3fa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421448436797904,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: kube-controller-manager-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205865ab9386e0544ce94281b335d3fa,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 205865ab9386e0544ce94281b335d3fa,kubernetes.io/config.seen: 2024-04-29T20:10:47.986004951Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6ab26349fa514b473a3ed37a595a92433d37a0e37b3976677189303140c4a97b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-456788,Uid:3448a18c94c0d03ef9134e75fc8da576,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421448433862480,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3448a18c94c0d03ef9134e75fc8da576,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.235:844
3,kubernetes.io/config.hash: 3448a18c94c0d03ef9134e75fc8da576,kubernetes.io/config.seen: 2024-04-29T20:10:47.986003623Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d3cabecd-6025-490d-9e69-8dedffdbc38d name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.090972351Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60a6384b-b6cd-4f66-b7ac-2f95a72686de name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.091067971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60a6384b-b6cd-4f66-b7ac-2f95a72686de name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:20:15 no-preload-456788 crio[729]: time="2024-04-29 20:20:15.091448896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d11f63276693766369907ad330504ed69597491d538cd9b5a329f53e0905107,PodSandboxId:fdd54e79fdd15614a68e32539580048e00223a09cd3114c4bf69b2737edb703d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421471129045087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd1c4813-8889-4f21-b21e-6007eaa163a6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d1a81fa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229a76fc962ea694d3ec4ef1d263c0f74884241f8f6d47bec60d8fa1273589d7,PodSandboxId:88d438c8c8de00704b3928c035bbe2d47c1ef1a06078688142c5aaadfd5a328a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421470274085527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pvhwv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38ee7b3-53fe-4609-9b2b-000f55de5d5c,},Annotations:map[string]string{io.kubernetes.container.hash: 749b4823,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5fc28aade0c3f32cd2a7a12b42b5608b169783d6272faea610cca67ee353b6,PodSandboxId:2f7a61fdbc3d8e688c5bb769ed501cdbda7575ccf57b66dc3b04c63d35cd656f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421469817009543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hcfbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0
b53824-478e-4523-ada4-1cd7ba306c81,},Annotations:map[string]string{io.kubernetes.container.hash: a08c04a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abda1b10e157741997a1ff6231a8d94bae873a8dc8ed5f4f50bcf25058f9ee0d,PodSandboxId:59f981fd24e8e92afd0fe36277fdbdeb4babf75c2e4be2bdde65e7ccd17946dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1714421469373027116,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6m95d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7,},Annotations:map[string]string{io.kubernetes.container.hash: f7b0245a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d547c066386359c26f32a9b3cdfeede872d97f68e253371e03cf4703b6fb2fa,PodSandboxId:488d6ae14da92daa58faf06f5f7bf8ce7a3a353d53ddd0b6f9fe844b52e45d85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421448799909861,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a08ac4ebc8433e053b376f035d670b,},Annotations:map[string]string{io.kubernetes.container.hash: 5d2686,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aa6b64ca6ded6d70a1edc0d5698398537da41a5a6f57ce52c6fd909454eb8ca,PodSandboxId:31c57455e70d7f5d16a47f64a012beb830434cacf5e70f328b54fc0cb61ff641,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421448737556685,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c073a5401d1f6a9264443a37232e7b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72ceac298eb0890d775ddb4eac2119401c8463dcd154f79f99c4532862f3f2e1,PodSandboxId:29122b6de9c841653ddbb98be21ac4f2be0a779ecf87f4a55f9490190caa306a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421448663523381,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205865ab9386e0544ce94281b335d3fa,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f235fbb4c2c97d173f9b1dd90f7c095c5e1b4a857f16f175edd51e9df2e1f13,PodSandboxId:6ab26349fa514b473a3ed37a595a92433d37a0e37b3976677189303140c4a97b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421448669420404,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3448a18c94c0d03ef9134e75fc8da576,},Annotations:map[string]string{io.kubernetes.container.hash: 5612cf45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60a6384b-b6cd-4f66-b7ac-2f95a72686de name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7d11f63276693       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   fdd54e79fdd15       storage-provisioner
	229a76fc962ea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   88d438c8c8de0       coredns-7db6d8ff4d-pvhwv
	5c5fc28aade0c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   2f7a61fdbc3d8       coredns-7db6d8ff4d-hcfbq
	abda1b10e1577       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   59f981fd24e8e       kube-proxy-6m95d
	6d547c0663863       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   488d6ae14da92       etcd-no-preload-456788
	8aa6b64ca6ded       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   31c57455e70d7       kube-scheduler-no-preload-456788
	0f235fbb4c2c9       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            2                   6ab26349fa514       kube-apiserver-no-preload-456788
	72ceac298eb08       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   2                   29122b6de9c84       kube-controller-manager-no-preload-456788
	
	
	==> coredns [229a76fc962ea694d3ec4ef1d263c0f74884241f8f6d47bec60d8fa1273589d7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [5c5fc28aade0c3f32cd2a7a12b42b5608b169783d6272faea610cca67ee353b6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-456788
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-456788
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=no-preload-456788
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T20_10_55_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:10:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-456788
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:20:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:16:21 +0000   Mon, 29 Apr 2024 20:10:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:16:21 +0000   Mon, 29 Apr 2024 20:10:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:16:21 +0000   Mon, 29 Apr 2024 20:10:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:16:21 +0000   Mon, 29 Apr 2024 20:11:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    no-preload-456788
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef89b0258cca4ea6b20778f725a369a5
	  System UUID:                ef89b025-8cca-4ea6-b207-78f725a369a5
	  Boot ID:                    0cc4a78e-ba7c-4855-80b5-3987fa0a2c2a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-hcfbq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-pvhwv                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-no-preload-456788                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-no-preload-456788             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-no-preload-456788    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-6m95d                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-no-preload-456788             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-sxgwr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  Starting                 9m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node no-preload-456788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node no-preload-456788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node no-preload-456788 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node no-preload-456788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node no-preload-456788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node no-preload-456788 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s                  kubelet          Node no-preload-456788 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m10s                  kubelet          Node no-preload-456788 status is now: NodeReady
	  Normal  RegisteredNode           9m8s                   node-controller  Node no-preload-456788 event: Registered Node no-preload-456788 in Controller
	
	
	==> dmesg <==
	[  +0.042923] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.629509] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.464896] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.729155] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.705460] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.061878] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070433] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.207964] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.156758] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.352328] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[ +16.870529] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	[  +0.063084] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.641919] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device
	[Apr29 20:06] kauditd_printk_skb: 100 callbacks suppressed
	[  +7.380681] kauditd_printk_skb: 52 callbacks suppressed
	[  +7.486488] kauditd_printk_skb: 24 callbacks suppressed
	[Apr29 20:10] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.939334] systemd-fstab-generator[4069]: Ignoring "noauto" option for root device
	[  +4.726296] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.356736] systemd-fstab-generator[4396]: Ignoring "noauto" option for root device
	[Apr29 20:11] systemd-fstab-generator[4623]: Ignoring "noauto" option for root device
	[  +0.128995] kauditd_printk_skb: 14 callbacks suppressed
	[Apr29 20:12] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [6d547c066386359c26f32a9b3cdfeede872d97f68e253371e03cf4703b6fb2fa] <==
	{"level":"info","ts":"2024-04-29T20:10:49.263651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 switched to configuration voters=(18354048925659093432)"}
	{"level":"info","ts":"2024-04-29T20:10:49.264084Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1b3c53dd134e6187","local-member-id":"feb6ae41040cd9b8","added-peer-id":"feb6ae41040cd9b8","added-peer-peer-urls":["https://192.168.39.235:2380"]}
	{"level":"info","ts":"2024-04-29T20:10:49.264692Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T20:10:49.26495Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"feb6ae41040cd9b8","initial-advertise-peer-urls":["https://192.168.39.235:2380"],"listen-peer-urls":["https://192.168.39.235:2380"],"advertise-client-urls":["https://192.168.39.235:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.235:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T20:10:49.265002Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T20:10:49.265124Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.235:2380"}
	{"level":"info","ts":"2024-04-29T20:10:49.265163Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.235:2380"}
	{"level":"info","ts":"2024-04-29T20:10:49.703774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T20:10:49.703888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T20:10:49.70392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 received MsgPreVoteResp from feb6ae41040cd9b8 at term 1"}
	{"level":"info","ts":"2024-04-29T20:10:49.703932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T20:10:49.703937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 received MsgVoteResp from feb6ae41040cd9b8 at term 2"}
	{"level":"info","ts":"2024-04-29T20:10:49.703946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became leader at term 2"}
	{"level":"info","ts":"2024-04-29T20:10:49.703953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: feb6ae41040cd9b8 elected leader feb6ae41040cd9b8 at term 2"}
	{"level":"info","ts":"2024-04-29T20:10:49.706445Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"feb6ae41040cd9b8","local-member-attributes":"{Name:no-preload-456788 ClientURLs:[https://192.168.39.235:2379]}","request-path":"/0/members/feb6ae41040cd9b8/attributes","cluster-id":"1b3c53dd134e6187","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T20:10:49.706668Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:10:49.710526Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:10:49.711016Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:10:49.721277Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T20:10:49.728421Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T20:10:49.72735Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.235:2379"}
	{"level":"info","ts":"2024-04-29T20:10:49.727743Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1b3c53dd134e6187","local-member-id":"feb6ae41040cd9b8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:10:49.728754Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:10:49.728835Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:10:49.735635Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:20:15 up 14 min,  0 users,  load average: 0.11, 0.27, 0.24
	Linux no-preload-456788 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0f235fbb4c2c97d173f9b1dd90f7c095c5e1b4a857f16f175edd51e9df2e1f13] <==
	I0429 20:14:11.650270       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:15:51.665707       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:15:51.666171       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0429 20:15:52.666626       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:15:52.666705       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:15:52.666714       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:15:52.666778       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:15:52.666879       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:15:52.668745       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:16:52.667700       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:16:52.668066       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:16:52.668118       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:16:52.669912       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:16:52.669976       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:16:52.670011       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:18:52.669295       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:18:52.669679       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:18:52.669718       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:18:52.670375       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:18:52.670497       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:18:52.671718       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [72ceac298eb0890d775ddb4eac2119401c8463dcd154f79f99c4532862f3f2e1] <==
	I0429 20:14:38.289102       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:15:07.845168       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:15:08.299635       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:15:37.851089       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:15:38.309496       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:16:07.857639       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:16:08.320138       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:16:37.864148       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:16:38.329095       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:17:07.870361       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:17:08.340081       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0429 20:17:08.791998       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="253.414µs"
	I0429 20:17:23.791127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="131.947µs"
	E0429 20:17:37.875667       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:17:38.348274       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:18:07.882810       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:18:08.357096       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:18:37.889426       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:18:38.365559       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:19:07.895331       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:19:08.375339       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:19:37.900535       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:19:38.384395       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:20:07.906003       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:20:08.393677       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [abda1b10e157741997a1ff6231a8d94bae873a8dc8ed5f4f50bcf25058f9ee0d] <==
	I0429 20:11:09.854428       1 server_linux.go:69] "Using iptables proxy"
	I0429 20:11:09.888525       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.235"]
	I0429 20:11:10.248806       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 20:11:10.248856       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 20:11:10.248874       1 server_linux.go:165] "Using iptables Proxier"
	I0429 20:11:10.252701       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 20:11:10.252892       1 server.go:872] "Version info" version="v1.30.0"
	I0429 20:11:10.252907       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:11:10.262121       1 config.go:192] "Starting service config controller"
	I0429 20:11:10.262151       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 20:11:10.263396       1 config.go:101] "Starting endpoint slice config controller"
	I0429 20:11:10.263458       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 20:11:10.264243       1 config.go:319] "Starting node config controller"
	I0429 20:11:10.264254       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 20:11:10.362821       1 shared_informer.go:320] Caches are synced for service config
	I0429 20:11:10.371405       1 shared_informer.go:320] Caches are synced for node config
	I0429 20:11:10.371456       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8aa6b64ca6ded6d70a1edc0d5698398537da41a5a6f57ce52c6fd909454eb8ca] <==
	W0429 20:10:52.546604       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 20:10:52.546725       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 20:10:52.783351       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 20:10:52.786502       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 20:10:52.834093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 20:10:52.834155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 20:10:52.867383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 20:10:52.867588       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 20:10:52.881495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 20:10:52.881553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 20:10:52.984440       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 20:10:52.984590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 20:10:53.059497       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 20:10:53.059737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 20:10:53.064449       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 20:10:53.064832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 20:10:53.066662       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 20:10:53.066762       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 20:10:53.117540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 20:10:53.117994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 20:10:53.117825       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 20:10:53.118114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 20:10:53.154239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 20:10:53.154294       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0429 20:10:55.604810       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 20:17:54 no-preload-456788 kubelet[4403]: E0429 20:17:54.828686    4403 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:17:54 no-preload-456788 kubelet[4403]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:17:54 no-preload-456788 kubelet[4403]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:17:54 no-preload-456788 kubelet[4403]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:17:54 no-preload-456788 kubelet[4403]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:18:04 no-preload-456788 kubelet[4403]: E0429 20:18:04.774431    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:18:19 no-preload-456788 kubelet[4403]: E0429 20:18:19.774749    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:18:34 no-preload-456788 kubelet[4403]: E0429 20:18:34.773560    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:18:47 no-preload-456788 kubelet[4403]: E0429 20:18:47.772844    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:18:54 no-preload-456788 kubelet[4403]: E0429 20:18:54.832491    4403 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:18:54 no-preload-456788 kubelet[4403]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:18:54 no-preload-456788 kubelet[4403]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:18:54 no-preload-456788 kubelet[4403]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:18:54 no-preload-456788 kubelet[4403]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:19:01 no-preload-456788 kubelet[4403]: E0429 20:19:01.773609    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:19:13 no-preload-456788 kubelet[4403]: E0429 20:19:13.773509    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:19:25 no-preload-456788 kubelet[4403]: E0429 20:19:25.773243    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:19:39 no-preload-456788 kubelet[4403]: E0429 20:19:39.773272    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:19:53 no-preload-456788 kubelet[4403]: E0429 20:19:53.773604    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:19:54 no-preload-456788 kubelet[4403]: E0429 20:19:54.828498    4403 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:19:54 no-preload-456788 kubelet[4403]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:19:54 no-preload-456788 kubelet[4403]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:19:54 no-preload-456788 kubelet[4403]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:19:54 no-preload-456788 kubelet[4403]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:20:04 no-preload-456788 kubelet[4403]: E0429 20:20:04.773353    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	
	
	==> storage-provisioner [7d11f63276693766369907ad330504ed69597491d538cd9b5a329f53e0905107] <==
	I0429 20:11:11.302162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 20:11:11.325107       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 20:11:11.325949       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 20:11:11.347805       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 20:11:11.348910       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-456788_a9e008ad-f36b-43f8-a4f8-c7bbb53e2367!
	I0429 20:11:11.361054       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"19f42fe4-9eff-437d-bb89-d4580910f858", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-456788_a9e008ad-f36b-43f8-a4f8-c7bbb53e2367 became leader
	I0429 20:11:11.451364       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-456788_a9e008ad-f36b-43f8-a4f8-c7bbb53e2367!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-456788 -n no-preload-456788
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-456788 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-sxgwr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-456788 describe pod metrics-server-569cc877fc-sxgwr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-456788 describe pod metrics-server-569cc877fc-sxgwr: exit status 1 (61.230713ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-sxgwr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-456788 describe pod metrics-server-569cc877fc-sxgwr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.49s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0429 20:12:48.914842   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 20:14:00.894203   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-161370 -n embed-certs-161370
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-29 20:21:04.756254201 +0000 UTC m=+6114.403629338
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-161370 -n embed-certs-161370
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-161370 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-161370 logs -n 25: (2.296048202s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:55 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| ssh     | cert-options-437743 ssh                                | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-437743 -- sudo                         | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-437743                                 | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-161370            | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-456788             | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-193781 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | disable-driver-mounts-193781                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 20:00 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-866143  | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-161370                 | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-919612        | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-456788                  | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:01 UTC | 29 Apr 24 20:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-919612             | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-866143       | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:10 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:02:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:02:45.502823   66875 out.go:291] Setting OutFile to fd 1 ...
	I0429 20:02:45.503073   66875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:45.503084   66875 out.go:304] Setting ErrFile to fd 2...
	I0429 20:02:45.503089   66875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:45.503272   66875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 20:02:45.503808   66875 out.go:298] Setting JSON to false
	I0429 20:02:45.504681   66875 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6263,"bootTime":1714414702,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 20:02:45.504736   66875 start.go:139] virtualization: kvm guest
	I0429 20:02:45.507344   66875 out.go:177] * [default-k8s-diff-port-866143] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 20:02:45.508715   66875 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:02:45.508745   66875 notify.go:220] Checking for updates...
	I0429 20:02:45.510093   66875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:02:45.512200   66875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:02:45.513622   66875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:02:45.514915   66875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 20:02:45.516228   66875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:02:45.517923   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:02:45.518366   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:45.518446   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:45.533484   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46187
	I0429 20:02:45.533901   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:45.534427   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:02:45.534448   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:45.534822   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:45.535013   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:02:45.535292   66875 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:02:45.535595   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:45.535639   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:45.551065   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0429 20:02:45.551469   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:45.551906   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:02:45.551928   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:45.552239   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:45.552451   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:02:45.584714   66875 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 20:02:45.586089   66875 start.go:297] selected driver: kvm2
	I0429 20:02:45.586117   66875 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:45.586250   66875 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:02:45.587043   66875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:45.587136   66875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 20:02:45.601799   66875 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 20:02:45.602171   66875 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:02:45.602246   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:02:45.602265   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:02:45.602323   66875 start.go:340] cluster config:
	{Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:45.602444   66875 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:45.605081   66875 out.go:177] * Starting "default-k8s-diff-port-866143" primary control-plane node in "default-k8s-diff-port-866143" cluster
	I0429 20:02:42.794291   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:45.866333   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:45.606536   66875 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:02:45.606590   66875 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 20:02:45.606602   66875 cache.go:56] Caching tarball of preloaded images
	I0429 20:02:45.606687   66875 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 20:02:45.606704   66875 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 20:02:45.606799   66875 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/config.json ...
	I0429 20:02:45.606986   66875 start.go:360] acquireMachinesLock for default-k8s-diff-port-866143: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:02:51.946332   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:55.018269   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:01.098329   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:04.170389   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:10.250316   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:13.322292   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:19.402290   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:22.474356   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:28.554348   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:31.626416   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:37.706282   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:40.778321   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:46.858318   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:49.930321   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:56.010331   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:59.082336   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:05.162299   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:08.234328   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:14.314352   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:17.386337   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:23.466350   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:26.538284   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:32.618297   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:35.690319   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:41.770372   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:44.842280   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:50.922320   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:53.994336   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:00.074389   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:03.146353   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:09.226369   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:12.298407   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:15.302828   66218 start.go:364] duration metric: took 4m7.483402316s to acquireMachinesLock for "no-preload-456788"
	I0429 20:05:15.302889   66218 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:15.302896   66218 fix.go:54] fixHost starting: 
	I0429 20:05:15.303301   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:15.303337   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:15.319582   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I0429 20:05:15.320057   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:15.320597   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:05:15.320620   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:15.321017   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:15.321272   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:15.321472   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:05:15.323137   66218 fix.go:112] recreateIfNeeded on no-preload-456788: state=Stopped err=<nil>
	I0429 20:05:15.323171   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	W0429 20:05:15.323346   66218 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:15.325520   66218 out.go:177] * Restarting existing kvm2 VM for "no-preload-456788" ...
	I0429 20:05:15.327122   66218 main.go:141] libmachine: (no-preload-456788) Calling .Start
	I0429 20:05:15.327314   66218 main.go:141] libmachine: (no-preload-456788) Ensuring networks are active...
	I0429 20:05:15.328136   66218 main.go:141] libmachine: (no-preload-456788) Ensuring network default is active
	I0429 20:05:15.328437   66218 main.go:141] libmachine: (no-preload-456788) Ensuring network mk-no-preload-456788 is active
	I0429 20:05:15.328771   66218 main.go:141] libmachine: (no-preload-456788) Getting domain xml...
	I0429 20:05:15.329442   66218 main.go:141] libmachine: (no-preload-456788) Creating domain...
	I0429 20:05:16.534970   66218 main.go:141] libmachine: (no-preload-456788) Waiting to get IP...
	I0429 20:05:16.536019   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:16.536375   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:16.536444   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:16.536369   67416 retry.go:31] will retry after 240.743093ms: waiting for machine to come up
	I0429 20:05:16.779123   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:16.779623   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:16.779659   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:16.779558   67416 retry.go:31] will retry after 355.595109ms: waiting for machine to come up
	I0429 20:05:17.137145   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:17.137512   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:17.137542   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:17.137480   67416 retry.go:31] will retry after 347.905643ms: waiting for machine to come up
	I0429 20:05:17.487174   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:17.487566   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:17.487597   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:17.487543   67416 retry.go:31] will retry after 547.016094ms: waiting for machine to come up
	I0429 20:05:15.300221   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:15.300278   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:05:15.300613   65980 buildroot.go:166] provisioning hostname "embed-certs-161370"
	I0429 20:05:15.300652   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:05:15.300910   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:05:15.302677   65980 machine.go:97] duration metric: took 4m37.41104152s to provisionDockerMachine
	I0429 20:05:15.302722   65980 fix.go:56] duration metric: took 4m37.432092484s for fixHost
	I0429 20:05:15.302728   65980 start.go:83] releasing machines lock for "embed-certs-161370", held for 4m37.432113341s
	W0429 20:05:15.302753   65980 start.go:713] error starting host: provision: host is not running
	W0429 20:05:15.302871   65980 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0429 20:05:15.302882   65980 start.go:728] Will try again in 5 seconds ...
	I0429 20:05:18.036617   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:18.037042   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:18.037104   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:18.037025   67416 retry.go:31] will retry after 465.100134ms: waiting for machine to come up
	I0429 20:05:18.503846   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:18.504326   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:18.504352   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:18.504283   67416 retry.go:31] will retry after 672.007195ms: waiting for machine to come up
	I0429 20:05:19.178173   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:19.178570   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:19.178604   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:19.178516   67416 retry.go:31] will retry after 744.052058ms: waiting for machine to come up
	I0429 20:05:19.924561   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:19.925029   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:19.925060   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:19.925002   67416 retry.go:31] will retry after 1.06511003s: waiting for machine to come up
	I0429 20:05:20.991584   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:20.992015   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:20.992046   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:20.991980   67416 retry.go:31] will retry after 1.677065765s: waiting for machine to come up
	I0429 20:05:22.671760   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:22.672123   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:22.672149   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:22.672085   67416 retry.go:31] will retry after 1.979191189s: waiting for machine to come up
	I0429 20:05:20.303964   65980 start.go:360] acquireMachinesLock for embed-certs-161370: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:05:24.654246   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:24.654711   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:24.654735   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:24.654663   67416 retry.go:31] will retry after 1.839551716s: waiting for machine to come up
	I0429 20:05:26.496511   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:26.496982   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:26.497017   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:26.496939   67416 retry.go:31] will retry after 3.505979368s: waiting for machine to come up
	I0429 20:05:30.006590   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:30.006916   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:30.006951   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:30.006871   67416 retry.go:31] will retry after 3.811785899s: waiting for machine to come up
	I0429 20:05:35.155600   66615 start.go:364] duration metric: took 3m25.093405289s to acquireMachinesLock for "old-k8s-version-919612"
	I0429 20:05:35.155655   66615 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:35.155661   66615 fix.go:54] fixHost starting: 
	I0429 20:05:35.155999   66615 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:35.156034   66615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:35.173332   66615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0429 20:05:35.173754   66615 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:35.174261   66615 main.go:141] libmachine: Using API Version  1
	I0429 20:05:35.174294   66615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:35.174602   66615 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:35.174797   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:35.174987   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetState
	I0429 20:05:35.176453   66615 fix.go:112] recreateIfNeeded on old-k8s-version-919612: state=Stopped err=<nil>
	I0429 20:05:35.176478   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	W0429 20:05:35.176647   66615 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:35.178966   66615 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-919612" ...
	I0429 20:05:33.823293   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.823787   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has current primary IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.823806   66218 main.go:141] libmachine: (no-preload-456788) Found IP for machine: 192.168.39.235
	I0429 20:05:33.823830   66218 main.go:141] libmachine: (no-preload-456788) Reserving static IP address...
	I0429 20:05:33.824243   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "no-preload-456788", mac: "52:54:00:15:ae:18", ip: "192.168.39.235"} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.824279   66218 main.go:141] libmachine: (no-preload-456788) DBG | skip adding static IP to network mk-no-preload-456788 - found existing host DHCP lease matching {name: "no-preload-456788", mac: "52:54:00:15:ae:18", ip: "192.168.39.235"}
	I0429 20:05:33.824293   66218 main.go:141] libmachine: (no-preload-456788) Reserved static IP address: 192.168.39.235
	I0429 20:05:33.824308   66218 main.go:141] libmachine: (no-preload-456788) Waiting for SSH to be available...
	I0429 20:05:33.824323   66218 main.go:141] libmachine: (no-preload-456788) DBG | Getting to WaitForSSH function...
	I0429 20:05:33.826371   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.826678   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.826711   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.826808   66218 main.go:141] libmachine: (no-preload-456788) DBG | Using SSH client type: external
	I0429 20:05:33.826836   66218 main.go:141] libmachine: (no-preload-456788) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa (-rw-------)
	I0429 20:05:33.826863   66218 main.go:141] libmachine: (no-preload-456788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:05:33.826876   66218 main.go:141] libmachine: (no-preload-456788) DBG | About to run SSH command:
	I0429 20:05:33.826887   66218 main.go:141] libmachine: (no-preload-456788) DBG | exit 0
	I0429 20:05:33.954275   66218 main.go:141] libmachine: (no-preload-456788) DBG | SSH cmd err, output: <nil>: 
	I0429 20:05:33.954631   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetConfigRaw
	I0429 20:05:33.955387   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:33.957827   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.958210   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.958241   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.958510   66218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/config.json ...
	I0429 20:05:33.958707   66218 machine.go:94] provisionDockerMachine start ...
	I0429 20:05:33.958726   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:33.958952   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:33.961236   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.961535   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.961564   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.961692   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:33.961857   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:33.962015   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:33.962163   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:33.962339   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:33.962522   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:33.962533   66218 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:05:34.070746   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:05:34.070777   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.071037   66218 buildroot.go:166] provisioning hostname "no-preload-456788"
	I0429 20:05:34.071062   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.071305   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.073680   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.074016   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.074043   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.074203   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.074374   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.074513   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.074612   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.074743   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.074946   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.074960   66218 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-456788 && echo "no-preload-456788" | sudo tee /etc/hostname
	I0429 20:05:34.198256   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-456788
	
	I0429 20:05:34.198286   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.201126   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.201482   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.201521   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.201710   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.201914   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.202055   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.202219   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.202361   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.202549   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.202573   66218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-456788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-456788/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-456788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:05:34.324678   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:34.324710   66218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:05:34.324732   66218 buildroot.go:174] setting up certificates
	I0429 20:05:34.324744   66218 provision.go:84] configureAuth start
	I0429 20:05:34.324756   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.325032   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:34.327623   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.328010   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.328040   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.328149   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.330359   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.330679   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.330711   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.330811   66218 provision.go:143] copyHostCerts
	I0429 20:05:34.330865   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:05:34.330878   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:05:34.330939   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:05:34.331023   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:05:34.331031   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:05:34.331054   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:05:34.331111   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:05:34.331119   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:05:34.331148   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:05:34.331231   66218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.no-preload-456788 san=[127.0.0.1 192.168.39.235 localhost minikube no-preload-456788]
	I0429 20:05:34.444358   66218 provision.go:177] copyRemoteCerts
	I0429 20:05:34.444420   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:05:34.444445   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.447129   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.447432   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.447466   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.447623   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.447833   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.447999   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.448129   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:34.533465   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:05:34.561724   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:05:34.589229   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 20:05:34.617451   66218 provision.go:87] duration metric: took 292.691614ms to configureAuth
	I0429 20:05:34.617491   66218 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:05:34.617733   66218 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:05:34.617821   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.620628   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.621016   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.621047   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.621257   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.621532   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.621718   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.621892   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.622085   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.622289   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.622305   66218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:05:34.908031   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:05:34.908064   66218 machine.go:97] duration metric: took 949.343369ms to provisionDockerMachine
	I0429 20:05:34.908077   66218 start.go:293] postStartSetup for "no-preload-456788" (driver="kvm2")
	I0429 20:05:34.908091   66218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:05:34.908107   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:34.908452   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:05:34.908489   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.911574   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.912026   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.912054   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.912219   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.912428   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.912616   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.912743   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:34.997625   66218 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:05:35.002661   66218 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:05:35.002687   66218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:05:35.002753   66218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:05:35.002822   66218 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:05:35.002906   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:05:35.013292   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:35.039830   66218 start.go:296] duration metric: took 131.741312ms for postStartSetup
	I0429 20:05:35.039865   66218 fix.go:56] duration metric: took 19.736969384s for fixHost
	I0429 20:05:35.039905   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.042526   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.042877   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.042912   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.043032   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.043239   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.043416   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.043534   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.043696   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:35.043848   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:35.043858   66218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:05:35.155463   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421135.123583649
	
	I0429 20:05:35.155485   66218 fix.go:216] guest clock: 1714421135.123583649
	I0429 20:05:35.155496   66218 fix.go:229] Guest: 2024-04-29 20:05:35.123583649 +0000 UTC Remote: 2024-04-29 20:05:35.039869068 +0000 UTC m=+267.371683880 (delta=83.714581ms)
	I0429 20:05:35.155514   66218 fix.go:200] guest clock delta is within tolerance: 83.714581ms
	I0429 20:05:35.155519   66218 start.go:83] releasing machines lock for "no-preload-456788", held for 19.852645936s
	I0429 20:05:35.155544   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.155881   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:35.158682   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.159051   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.159070   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.159205   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.159793   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.159987   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.160077   66218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:05:35.160117   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.160216   66218 ssh_runner.go:195] Run: cat /version.json
	I0429 20:05:35.160244   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.162788   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163016   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163226   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.163250   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163372   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.163449   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.163475   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163537   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.163621   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.163723   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.163791   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.163873   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:35.163920   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.164064   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:35.248518   66218 ssh_runner.go:195] Run: systemctl --version
	I0429 20:05:35.271479   66218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:05:35.423324   66218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:05:35.430371   66218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:05:35.430445   66218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:05:35.447860   66218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:05:35.447886   66218 start.go:494] detecting cgroup driver to use...
	I0429 20:05:35.447949   66218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:05:35.464102   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:05:35.479069   66218 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:05:35.479158   66218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:05:35.493800   66218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:05:35.509284   66218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:05:35.627273   66218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:05:35.785213   66218 docker.go:233] disabling docker service ...
	I0429 20:05:35.785300   66218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:05:35.803584   66218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:05:35.818874   66218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:05:35.984309   66218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:05:36.128841   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:05:36.148237   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:05:36.172144   66218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:05:36.172243   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.191274   66218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:05:36.191353   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.209656   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.224474   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.238802   66218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:05:36.252515   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.264522   66218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.286496   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.299127   66218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:05:36.310702   66218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:05:36.310760   66218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:05:36.336226   66218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:05:36.348617   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:36.474875   66218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:05:36.619181   66218 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:05:36.619257   66218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:05:36.625401   66218 start.go:562] Will wait 60s for crictl version
	I0429 20:05:36.625475   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:36.630232   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:05:36.667005   66218 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:05:36.667093   66218 ssh_runner.go:195] Run: crio --version
	I0429 20:05:36.699758   66218 ssh_runner.go:195] Run: crio --version
	I0429 20:05:36.734406   66218 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:05:36.735853   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:36.738683   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:36.739019   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:36.739049   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:36.739310   66218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 20:05:36.745227   66218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:36.760124   66218 kubeadm.go:877] updating cluster {Name:no-preload-456788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:05:36.760238   66218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:05:36.760278   66218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:05:36.801389   66218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:05:36.801414   66218 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 20:05:36.801470   66218 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:36.801508   66218 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:36.801524   66218 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:36.801559   66218 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:36.801580   66218 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.801632   66218 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0429 20:05:36.801687   66218 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:36.801688   66218 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:36.803301   66218 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:36.803300   66218 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.803308   66218 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:36.803382   66218 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:36.956976   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.964957   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.022376   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.025860   66218 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0429 20:05:37.025893   66218 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0429 20:05:37.025915   66218 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.025924   66218 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:37.025962   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.025964   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.072629   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.072688   66218 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0429 20:05:37.072713   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:37.072741   66218 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.072791   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.118610   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0429 20:05:37.118704   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.118720   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.128364   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0429 20:05:37.128474   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:37.161350   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0429 20:05:37.165670   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0429 20:05:37.165693   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0429 20:05:37.165710   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.165754   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.165762   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0429 20:05:37.165779   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:37.167440   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:37.174173   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:37.180560   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:37.715733   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:35.180393   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .Start
	I0429 20:05:35.180576   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring networks are active...
	I0429 20:05:35.181281   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network default is active
	I0429 20:05:35.181678   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network mk-old-k8s-version-919612 is active
	I0429 20:05:35.182102   66615 main.go:141] libmachine: (old-k8s-version-919612) Getting domain xml...
	I0429 20:05:35.182867   66615 main.go:141] libmachine: (old-k8s-version-919612) Creating domain...
	I0429 20:05:36.459478   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting to get IP...
	I0429 20:05:36.460301   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.460751   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.460817   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.460706   67552 retry.go:31] will retry after 280.48781ms: waiting for machine to come up
	I0429 20:05:36.743188   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.743630   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.743658   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.743591   67552 retry.go:31] will retry after 326.238132ms: waiting for machine to come up
	I0429 20:05:37.071146   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.071576   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.071609   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.071527   67552 retry.go:31] will retry after 380.72234ms: waiting for machine to come up
	I0429 20:05:37.453967   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.454435   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.454464   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.454385   67552 retry.go:31] will retry after 593.303053ms: waiting for machine to come up
	I0429 20:05:38.049072   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.049555   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.049587   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.049500   67552 retry.go:31] will retry after 694.752524ms: waiting for machine to come up
	I0429 20:05:38.746542   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.747034   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.747065   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.747002   67552 retry.go:31] will retry after 860.161186ms: waiting for machine to come up
	I0429 20:05:39.609098   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:39.609601   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:39.609634   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:39.609544   67552 retry.go:31] will retry after 726.889681ms: waiting for machine to come up
	I0429 20:05:39.327634   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.161845487s)
	I0429 20:05:39.327673   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.161870572s)
	I0429 20:05:39.327710   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0429 20:05:39.327675   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0429 20:05:39.327737   66218 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:39.327748   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0: (2.16027023s)
	I0429 20:05:39.327805   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:39.327811   66218 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0429 20:05:39.327821   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0: (2.153617598s)
	I0429 20:05:39.327846   66218 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:39.327878   66218 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0429 20:05:39.327891   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0: (2.147303278s)
	I0429 20:05:39.327910   66218 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:39.327929   66218 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0429 20:05:39.327944   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327954   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.612190652s)
	I0429 20:05:39.327960   66218 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:39.327984   66218 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0429 20:05:39.328035   66218 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:39.328061   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327991   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327886   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.333555   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:39.343257   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:41.263038   66218 ssh_runner.go:235] Completed: which crictl: (1.934889703s)
	I0429 20:05:41.263103   66218 ssh_runner.go:235] Completed: which crictl: (1.93491368s)
	I0429 20:05:41.263121   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:41.263132   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.935299869s)
	I0429 20:05:41.263153   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0: (1.929577799s)
	I0429 20:05:41.263155   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0429 20:05:41.263217   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.919934007s)
	I0429 20:05:41.263221   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0429 20:05:41.263248   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:41.263251   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0429 20:05:41.263290   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:41.263301   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:41.263343   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:41.263159   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:40.338292   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:40.338823   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:40.338864   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:40.338757   67552 retry.go:31] will retry after 1.310400969s: waiting for machine to come up
	I0429 20:05:41.651107   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:41.651625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:41.651670   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:41.651575   67552 retry.go:31] will retry after 1.769756679s: waiting for machine to come up
	I0429 20:05:43.423326   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:43.423829   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:43.423869   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:43.423790   67552 retry.go:31] will retry after 1.748237944s: waiting for machine to come up
	I0429 20:05:44.084051   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.820737476s)
	I0429 20:05:44.084139   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.820774517s)
	I0429 20:05:44.084167   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.820842646s)
	I0429 20:05:44.084186   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0429 20:05:44.084142   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0429 20:05:44.084202   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0429 20:05:44.084211   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:44.084065   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0: (2.820919138s)
	I0429 20:05:44.084244   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0429 20:05:44.084260   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:44.084272   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0: (2.82086612s)
	I0429 20:05:44.084305   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0429 20:05:44.084331   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:44.084375   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:44.091151   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0429 20:05:46.553783   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.469493694s)
	I0429 20:05:46.553882   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0429 20:05:46.553912   66218 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:46.553837   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.469479626s)
	I0429 20:05:46.553973   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:46.553975   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0429 20:05:47.510118   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0429 20:05:47.510169   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:47.510212   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:45.173157   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:45.173617   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:45.173642   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:45.173563   67552 retry.go:31] will retry after 2.784243469s: waiting for machine to come up
	I0429 20:05:47.959942   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:47.960473   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:47.960508   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:47.960410   67552 retry.go:31] will retry after 3.046526969s: waiting for machine to come up
	I0429 20:05:49.069163   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.55892426s)
	I0429 20:05:49.069202   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0429 20:05:49.069231   66218 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:49.069276   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:51.007941   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:51.008230   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:51.008253   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:51.008213   67552 retry.go:31] will retry after 4.220985004s: waiting for machine to come up
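The retry.go messages in this block are minikube polling libvirt until the old-k8s-version-919612 domain acquires a DHCP lease. A rough manual equivalent from the shell would be (hypothetical; minikube queries libvirt through its API and uses a growing backoff rather than a fixed sleep):

	while ! sudo virsh net-dhcp-leases mk-old-k8s-version-919612 | grep -q '52:54:00:62:23:ed'; do
	  sleep 1
	done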
	I0429 20:05:56.579154   66875 start.go:364] duration metric: took 3m10.972135355s to acquireMachinesLock for "default-k8s-diff-port-866143"
	I0429 20:05:56.579208   66875 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:56.579230   66875 fix.go:54] fixHost starting: 
	I0429 20:05:56.579615   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:56.579655   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:56.599113   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0429 20:05:56.599627   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:56.600173   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:05:56.600198   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:56.600488   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:56.600694   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:05:56.600849   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:05:56.602291   66875 fix.go:112] recreateIfNeeded on default-k8s-diff-port-866143: state=Stopped err=<nil>
	I0429 20:05:56.602315   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	W0429 20:05:56.602456   66875 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:56.605006   66875 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-866143" ...
	I0429 20:05:53.062693   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.993382111s)
	I0429 20:05:53.062730   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0429 20:05:53.062757   66218 cache_images.go:123] Successfully loaded all cached images
	I0429 20:05:53.062762   66218 cache_images.go:92] duration metric: took 16.261337424s to LoadCachedImages
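The LoadCachedImages phase that finishes here repeats one pattern per image: inspect the image in the runtime, and if it is missing (or present under the wrong hash) remove the stale tag, confirm the cached tarball already exists on the node, and load it with podman. Condensed into a shell sketch for one of the images from this log (the real logic lives in cache_images.go and crio.go):

	IMG=registry.k8s.io/kube-proxy:v1.30.0
	TAR=/var/lib/minikube/images/kube-proxy_v1.30.0
	sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1 || {
	  sudo /usr/bin/crictl rmi "$IMG"        # drop whatever is currently tagged under that name
	  stat -c "%s %y" "$TAR"                 # tarball already on the node, so the copy is skipped
	  sudo podman load -i "$TAR"             # import the cached image into CRI-O's storage
	}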
	I0429 20:05:53.062770   66218 kubeadm.go:928] updating node { 192.168.39.235 8443 v1.30.0 crio true true} ...
	I0429 20:05:53.062893   66218 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-456788 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:05:53.062994   66218 ssh_runner.go:195] Run: crio config
	I0429 20:05:53.116289   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:05:53.116311   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:05:53.116322   66218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:05:53.116340   66218 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.235 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-456788 NodeName:no-preload-456788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:05:53.116516   66218 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-456788"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
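This generated kubeadm.yaml is copied to the node as /var/tmp/minikube/kubeadm.yaml.new (the 2161-byte scp below) and then driven phase by phase during the restart; the invocations appear verbatim later in this log, starting with:

	sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" \
	  kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	# followed by the kubeconfig all, kubelet-start, control-plane all, and etcd local phases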
	
	I0429 20:05:53.116592   66218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:05:53.128095   66218 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:05:53.128174   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:05:53.138786   66218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0429 20:05:53.158151   66218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:05:53.176440   66218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 20:05:53.195348   66218 ssh_runner.go:195] Run: grep 192.168.39.235	control-plane.minikube.internal$ /etc/hosts
	I0429 20:05:53.199408   66218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:53.212407   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:53.349752   66218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:05:53.368381   66218 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788 for IP: 192.168.39.235
	I0429 20:05:53.368401   66218 certs.go:194] generating shared ca certs ...
	I0429 20:05:53.368415   66218 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:05:53.368565   66218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:05:53.368609   66218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:05:53.368619   66218 certs.go:256] generating profile certs ...
	I0429 20:05:53.368697   66218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.key
	I0429 20:05:53.368751   66218 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.key.5f45c78c
	I0429 20:05:53.368785   66218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.key
	I0429 20:05:53.368889   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:05:53.368915   66218 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:05:53.368921   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:05:53.368944   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:05:53.368972   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:05:53.368993   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:05:53.369029   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:53.369624   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:05:53.428403   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:05:53.467050   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:05:53.501319   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:05:53.528828   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:05:53.553742   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:05:53.582308   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:05:53.609324   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:05:53.636730   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:05:53.663388   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:05:53.690949   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:05:53.717113   66218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:05:53.735784   66218 ssh_runner.go:195] Run: openssl version
	I0429 20:05:53.741879   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:05:53.752930   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.757811   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.757861   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.763798   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:05:53.775019   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:05:53.786654   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.791457   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.791500   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.797608   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:05:53.809139   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:05:53.820927   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.826384   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.826441   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.832798   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:05:53.844300   66218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:05:53.849139   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:05:53.855556   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:05:53.861716   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:05:53.868390   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:05:53.874740   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:05:53.881101   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
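Every openssl run in this block uses -checkend 86400, which makes openssl exit non-zero when the certificate expires within the next 24 hours (86400 seconds). Standalone, the same check reads:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "certificate still valid for at least 24h"
	else
	  echo "certificate expires within 24h - regenerate"
	fi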
	I0429 20:05:53.887688   66218 kubeadm.go:391] StartCluster: {Name:no-preload-456788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:05:53.887807   66218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:05:53.887858   66218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:05:53.930491   66218 cri.go:89] found id: ""
	I0429 20:05:53.930563   66218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:05:53.941016   66218 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:05:53.941037   66218 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:05:53.941042   66218 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:05:53.941081   66218 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:05:53.950651   66218 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:05:53.951536   66218 kubeconfig.go:125] found "no-preload-456788" server: "https://192.168.39.235:8443"
	I0429 20:05:53.953451   66218 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:05:53.962857   66218 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.235
	I0429 20:05:53.962879   66218 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:05:53.962889   66218 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:05:53.962932   66218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:05:54.000841   66218 cri.go:89] found id: ""
	I0429 20:05:54.000909   66218 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:05:54.018221   66218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:05:54.028524   66218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:05:54.028556   66218 kubeadm.go:156] found existing configuration files:
	
	I0429 20:05:54.028600   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:05:54.038717   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:05:54.038807   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:05:54.049350   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:05:54.059483   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:05:54.059548   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:05:54.069518   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:05:54.078900   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:05:54.078953   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:05:54.088652   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:05:54.098545   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:05:54.098596   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
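The four grep-then-rm pairs above apply one rule per kubeconfig: if the file does not reference https://control-plane.minikube.internal:8443 (here the files simply do not exist yet), remove it so the following kubeadm phases can regenerate it. As an equivalent loop (a sketch, not the actual Go in kubeadm.go):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done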
	I0429 20:05:54.108351   66218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:05:54.118645   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:54.236330   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:55.859211   66218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.622843221s)
	I0429 20:05:55.859254   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.075993   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.175176   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.274249   66218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:05:56.274469   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:56.775315   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:57.274840   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:57.315656   66218 api_server.go:72] duration metric: took 1.041421989s to wait for apiserver process to appear ...
	I0429 20:05:57.315697   66218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:05:57.315719   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:05:57.316669   66218 api_server.go:269] stopped: https://192.168.39.235:8443/healthz: Get "https://192.168.39.235:8443/healthz": dial tcp 192.168.39.235:8443: connect: connection refused
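The immediate "connection refused" is expected: the control-plane static pods were only just rewritten, so api_server.go keeps polling /healthz until the apiserver answers. A rough shell equivalent of that wait (hypothetical; minikube polls over HTTPS with the cluster CA rather than curl -k):

	until curl -fsk https://192.168.39.235:8443/healthz >/dev/null; do
	  sleep 1
	done
	curl -sk https://192.168.39.235:8443/healthz   # prints "ok" once the apiserver is up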
	I0429 20:05:55.230409   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230860   66615 main.go:141] libmachine: (old-k8s-version-919612) Found IP for machine: 192.168.72.240
	I0429 20:05:55.230889   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has current primary IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230898   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserving static IP address...
	I0429 20:05:55.231252   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.231287   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | skip adding static IP to network mk-old-k8s-version-919612 - found existing host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"}
	I0429 20:05:55.231305   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserved static IP address: 192.168.72.240
	I0429 20:05:55.231319   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting for SSH to be available...
	I0429 20:05:55.231335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Getting to WaitForSSH function...
	I0429 20:05:55.233198   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.233500   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH client type: external
	I0429 20:05:55.233671   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa (-rw-------)
	I0429 20:05:55.233706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:05:55.233730   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | About to run SSH command:
	I0429 20:05:55.233747   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | exit 0
	I0429 20:05:55.354242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | SSH cmd err, output: <nil>: 
	I0429 20:05:55.354584   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetConfigRaw
	I0429 20:05:55.355221   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.357791   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.358276   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358564   66615 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/config.json ...
	I0429 20:05:55.358786   66615 machine.go:94] provisionDockerMachine start ...
	I0429 20:05:55.358807   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:55.359037   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.361536   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.361861   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.361885   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.362048   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.362247   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362416   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362568   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.362733   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.362930   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.362943   66615 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:05:55.462364   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:05:55.462388   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462632   66615 buildroot.go:166] provisioning hostname "old-k8s-version-919612"
	I0429 20:05:55.462669   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462852   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.465335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465674   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.465706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465836   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.466034   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466208   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466366   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.466525   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.466729   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.466745   66615 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-919612 && echo "old-k8s-version-919612" | sudo tee /etc/hostname
	I0429 20:05:55.596239   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-919612
	
	I0429 20:05:55.596281   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.599221   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599575   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.599606   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599770   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.599970   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600122   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600316   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.600498   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.600667   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.600690   66615 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-919612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-919612/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-919612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:05:55.716588   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:55.716621   66615 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:05:55.716647   66615 buildroot.go:174] setting up certificates
	I0429 20:05:55.716658   66615 provision.go:84] configureAuth start
	I0429 20:05:55.716671   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.716956   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.719569   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.719919   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.719956   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.720095   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.722484   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.722876   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.722912   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.723036   66615 provision.go:143] copyHostCerts
	I0429 20:05:55.723087   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:05:55.723097   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:05:55.723158   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:05:55.723253   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:05:55.723262   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:05:55.723280   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:05:55.723336   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:05:55.723342   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:05:55.723358   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:05:55.723404   66615 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-919612 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-919612]
	I0429 20:05:55.878639   66615 provision.go:177] copyRemoteCerts
	I0429 20:05:55.878724   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:05:55.878750   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.881746   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882306   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.882358   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882540   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.882743   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.882986   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.883139   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:55.973158   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:05:56.003094   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0429 20:05:56.031670   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:05:56.059049   66615 provision.go:87] duration metric: took 342.376371ms to configureAuth
	I0429 20:05:56.059091   66615 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:05:56.059335   66615 config.go:182] Loaded profile config "old-k8s-version-919612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 20:05:56.059441   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.062416   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.062887   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.062921   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.063082   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.063322   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063521   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063688   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.063901   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.064066   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.064082   66615 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:05:56.342484   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:05:56.342511   66615 machine.go:97] duration metric: took 983.711183ms to provisionDockerMachine
	I0429 20:05:56.342525   66615 start.go:293] postStartSetup for "old-k8s-version-919612" (driver="kvm2")
	I0429 20:05:56.342540   66615 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:05:56.342589   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.342931   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:05:56.342983   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.345399   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345710   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.345731   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345869   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.346047   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.346233   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.346418   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.431189   66615 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:05:56.435878   66615 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:05:56.435903   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:05:56.435983   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:05:56.436086   66615 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:05:56.436170   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:05:56.445841   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:56.472683   66615 start.go:296] duration metric: took 130.146591ms for postStartSetup
	I0429 20:05:56.472715   66615 fix.go:56] duration metric: took 21.31705375s for fixHost
	I0429 20:05:56.472736   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.475127   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.475492   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475624   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.475857   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476055   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476211   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.476378   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.476536   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.476547   66615 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:05:56.578999   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421156.548872445
	
	I0429 20:05:56.579028   66615 fix.go:216] guest clock: 1714421156.548872445
	I0429 20:05:56.579040   66615 fix.go:229] Guest: 2024-04-29 20:05:56.548872445 +0000 UTC Remote: 2024-04-29 20:05:56.472718546 +0000 UTC m=+226.572342220 (delta=76.153899ms)
	I0429 20:05:56.579068   66615 fix.go:200] guest clock delta is within tolerance: 76.153899ms
	I0429 20:05:56.579076   66615 start.go:83] releasing machines lock for "old-k8s-version-919612", held for 21.423436193s
	I0429 20:05:56.579111   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.579407   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:56.582338   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582673   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.582711   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582856   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583365   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583543   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583625   66615 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:05:56.583667   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.583765   66615 ssh_runner.go:195] Run: cat /version.json
	I0429 20:05:56.583805   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.586263   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586552   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586618   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586656   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586891   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.586953   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586989   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.587060   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587170   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.587240   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587310   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587458   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587462   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.587600   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.672678   66615 ssh_runner.go:195] Run: systemctl --version
	I0429 20:05:56.694175   66615 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:05:56.859009   66615 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:05:56.865723   66615 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:05:56.865798   66615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:05:56.885686   66615 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:05:56.885714   66615 start.go:494] detecting cgroup driver to use...
	I0429 20:05:56.885805   66615 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:05:56.909082   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:05:56.931583   66615 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:05:56.931646   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:05:56.953524   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:05:56.976170   66615 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:05:57.122813   66615 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:05:57.315725   66615 docker.go:233] disabling docker service ...
	I0429 20:05:57.315786   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:05:57.333927   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:05:57.350022   66615 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:05:57.525787   66615 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:05:57.685802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:05:57.703246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:05:57.730558   66615 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0429 20:05:57.730618   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.747081   66615 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:05:57.747133   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.760168   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.773553   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.787609   66615 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:05:57.800532   66615 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:05:57.813582   66615 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:05:57.813669   66615 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:05:57.832224   66615 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:05:57.844783   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:57.991666   66615 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:05:58.183635   66615 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:05:58.183718   66615 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:05:58.189441   66615 start.go:562] Will wait 60s for crictl version
	I0429 20:05:58.189509   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:05:58.194049   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:05:58.250751   66615 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:05:58.250839   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.292368   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.336121   66615 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0429 20:05:58.337389   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:58.340707   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341125   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:58.341153   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341387   66615 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0429 20:05:58.346434   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:58.361081   66615 kubeadm.go:877] updating cluster {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:05:58.361242   66615 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 20:05:58.361307   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:05:58.414304   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:05:58.414366   66615 ssh_runner.go:195] Run: which lz4
	I0429 20:05:58.420584   66615 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:05:58.425682   66615 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:05:58.425712   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0429 20:05:56.606748   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Start
	I0429 20:05:56.606929   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring networks are active...
	I0429 20:05:56.607627   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring network default is active
	I0429 20:05:56.608028   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring network mk-default-k8s-diff-port-866143 is active
	I0429 20:05:56.608557   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Getting domain xml...
	I0429 20:05:56.609325   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Creating domain...
	I0429 20:05:57.911657   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting to get IP...
	I0429 20:05:57.912705   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:57.913118   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:57.913211   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:57.913104   67743 retry.go:31] will retry after 298.590493ms: waiting for machine to come up
	I0429 20:05:58.213730   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.214424   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.214578   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:58.214487   67743 retry.go:31] will retry after 375.439886ms: waiting for machine to come up
	I0429 20:05:58.592145   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.592671   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.592700   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:58.592626   67743 retry.go:31] will retry after 432.890106ms: waiting for machine to come up
	I0429 20:05:59.027344   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.027782   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.027812   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:59.027732   67743 retry.go:31] will retry after 547.616894ms: waiting for machine to come up
	I0429 20:05:59.576555   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.577116   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.577140   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:59.577058   67743 retry.go:31] will retry after 662.088326ms: waiting for machine to come up
	I0429 20:06:00.240907   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.241712   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.241744   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:00.241667   67743 retry.go:31] will retry after 691.874394ms: waiting for machine to come up
	I0429 20:05:57.816218   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.079778   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:01.079817   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:01.079832   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.112008   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:01.112043   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:01.316358   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.322401   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:01.322437   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:01.815974   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.825156   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:01.825219   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:02.316473   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:02.328725   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:02.328763   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:02.816674   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:02.822826   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:02.822866   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:03.315863   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:03.323314   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:03.323366   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:03.816529   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:03.822521   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:03.822556   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:04.316336   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:04.325750   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I0429 20:06:04.337308   66218 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:04.337348   66218 api_server.go:131] duration metric: took 7.02164287s to wait for apiserver health ...
	I0429 20:06:04.337361   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:06:04.337370   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:04.505344   66218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:00.520217   66615 crio.go:462] duration metric: took 2.099664395s to copy over tarball
	I0429 20:06:00.520314   66615 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:04.082476   66615 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562128598s)
	I0429 20:06:04.082527   66615 crio.go:469] duration metric: took 3.562271241s to extract the tarball
	I0429 20:06:04.082538   66615 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:04.129338   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:04.177683   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:06:04.177709   66615 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 20:06:04.177762   66615 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.177798   66615 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.177817   66615 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.177834   66615 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.177835   66615 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.177783   66615 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.177897   66615 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0429 20:06:04.177972   66615 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179282   66615 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.179360   66615 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.179361   66615 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.179320   66615 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.179331   66615 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.179299   66615 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0429 20:06:04.323997   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376145   66615 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0429 20:06:04.376210   66615 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376261   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.381592   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.420565   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0429 20:06:04.440670   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0429 20:06:04.461763   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.499283   66615 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0429 20:06:04.499347   66615 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0429 20:06:04.499404   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513860   66615 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0429 20:06:04.513900   66615 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.513946   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513988   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0429 20:06:04.548990   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.556713   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.556942   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.556965   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0429 20:06:04.566227   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.598982   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.656930   66615 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0429 20:06:04.656980   66615 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.657038   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.724922   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0429 20:06:04.725179   66615 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0429 20:06:04.725218   66615 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.725262   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732375   66615 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0429 20:06:04.732429   66615 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.732482   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732492   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.732483   66615 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0429 20:06:04.732669   66615 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.732726   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.735419   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.739785   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.742496   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.834684   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0429 20:06:04.834754   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0429 20:06:04.834811   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0429 20:06:04.847076   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0429 20:06:00.935382   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.935935   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.935979   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:00.935902   67743 retry.go:31] will retry after 1.024898519s: waiting for machine to come up
	I0429 20:06:01.962446   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:01.963109   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:01.963140   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:01.963059   67743 retry.go:31] will retry after 1.19225855s: waiting for machine to come up
	I0429 20:06:03.157257   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:03.157781   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:03.157843   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:03.157738   67743 retry.go:31] will retry after 1.699779549s: waiting for machine to come up
	I0429 20:06:04.859190   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:04.859622   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:04.859670   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:04.859565   67743 retry.go:31] will retry after 2.307475318s: waiting for machine to come up
	I0429 20:06:04.671477   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:04.684650   66218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:04.718146   66218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:04.908181   66218 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:04.908213   66218 system_pods.go:61] "coredns-7db6d8ff4d-d4kwk" [215ff4b8-3ae5-49a7-8a9f-6acb4d176b93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:04.908223   66218 system_pods.go:61] "etcd-no-preload-456788" [3ec7e177-1b68-4bff-aa4d-803f5346e1be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:04.908231   66218 system_pods.go:61] "kube-apiserver-no-preload-456788" [5e8bf0b0-9669-4f0c-8da1-523589158b16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:04.908236   66218 system_pods.go:61] "kube-controller-manager-no-preload-456788" [515363f7-bde1-4ba7-a5a9-6779f673afaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:04.908240   66218 system_pods.go:61] "kube-proxy-slnph" [29f503bf-ce19-425c-8174-2b8e7b27a424] Running
	I0429 20:06:04.908253   66218 system_pods.go:61] "kube-scheduler-no-preload-456788" [4f394af0-6452-49dd-9770-7c6bfcff3936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:04.908258   66218 system_pods.go:61] "metrics-server-569cc877fc-6mpnm" [5f183615-a243-410a-a524-ebdaa65e6400] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:04.908262   66218 system_pods.go:61] "storage-provisioner" [f74a777d-a3d7-4682-bad0-44bb993a2d43] Running
	I0429 20:06:04.908270   66218 system_pods.go:74] duration metric: took 190.098153ms to wait for pod list to return data ...
	I0429 20:06:04.908278   66218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:05.212876   66218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:05.212913   66218 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:05.212929   66218 node_conditions.go:105] duration metric: took 304.645545ms to run NodePressure ...
	I0429 20:06:05.212950   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:05.913252   66218 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:05.928914   66218 kubeadm.go:733] kubelet initialised
	I0429 20:06:05.928947   66218 kubeadm.go:734] duration metric: took 15.668535ms waiting for restarted kubelet to initialise ...
	I0429 20:06:05.928957   66218 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:05.937357   66218 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:05.091766   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:05.269730   66615 cache_images.go:92] duration metric: took 1.092006107s to LoadCachedImages
	W0429 20:06:05.269839   66615 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0429 20:06:05.269857   66615 kubeadm.go:928] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I0429 20:06:05.269988   66615 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-919612 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:05.270088   66615 ssh_runner.go:195] Run: crio config
	I0429 20:06:05.322439   66615 cni.go:84] Creating CNI manager for ""
	I0429 20:06:05.322471   66615 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:05.322486   66615 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:05.322522   66615 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-919612 NodeName:old-k8s-version-919612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0429 20:06:05.322746   66615 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-919612"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:05.322810   66615 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0429 20:06:05.340981   66615 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:05.341058   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:05.357048   66615 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0429 20:06:05.384352   66615 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:05.407887   66615 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0429 20:06:05.431531   66615 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:05.437567   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:05.457652   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:05.610358   66615 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:05.641538   66615 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612 for IP: 192.168.72.240
	I0429 20:06:05.641568   66615 certs.go:194] generating shared ca certs ...
	I0429 20:06:05.641583   66615 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:05.641758   66615 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:05.641831   66615 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:05.641843   66615 certs.go:256] generating profile certs ...
	I0429 20:06:05.641948   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.key
	I0429 20:06:05.642020   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key.5df5e618
	I0429 20:06:05.642083   66615 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key
	I0429 20:06:05.642256   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:05.642304   66615 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:05.642325   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:05.642364   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:05.642401   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:05.642435   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:05.642489   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:05.643156   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:05.691350   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:05.734434   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:05.773056   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:05.819778   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 20:06:05.868256   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:05.911589   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:05.957714   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 20:06:06.002120   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:06.039736   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:06.079636   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:06.118317   66615 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:06.145932   66615 ssh_runner.go:195] Run: openssl version
	I0429 20:06:06.152970   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:06.166609   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.171939   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.172033   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.179153   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:06.193491   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:06.207800   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214803   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214876   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.222154   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:06.236908   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:06.254197   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260797   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260863   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.267635   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:06.282727   66615 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:06.289580   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:06.301014   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:06.310503   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:06.318708   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:06.325718   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:06.332690   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 20:06:06.339914   66615 kubeadm.go:391] StartCluster: {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:06.340012   66615 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:06.340069   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.391511   66615 cri.go:89] found id: ""
	I0429 20:06:06.391618   66615 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:06.408955   66615 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:06.408985   66615 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:06.408991   66615 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:06.409060   66615 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:06.425276   66615 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:06.426397   66615 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-919612" does not appear in /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:06:06.427298   66615 kubeconfig.go:62] /home/jenkins/minikube-integration/18774-7754/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-919612" cluster setting kubeconfig missing "old-k8s-version-919612" context setting]
	I0429 20:06:06.428287   66615 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:06.429908   66615 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:06.443630   66615 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.240
	I0429 20:06:06.443674   66615 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:06.443686   66615 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:06.443753   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.486251   66615 cri.go:89] found id: ""
	I0429 20:06:06.486339   66615 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:06.507136   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:06.523798   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:06.523828   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:06.523887   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:06:06.536668   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:06.536735   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:06.547800   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:06:06.560435   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:06.560517   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:06.572227   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.582772   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:06.582825   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.594168   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:06:06.605940   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:06.606013   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:06.621829   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:06.637520   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:06.779910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:07.921143   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.141191032s)
	I0429 20:06:07.921178   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.172381   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.276243   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.398312   66615 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:08.398424   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:08.899388   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.399344   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.898731   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:07.168679   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:07.169214   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:07.169264   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:07.169146   67743 retry.go:31] will retry after 2.050354993s: waiting for machine to come up
	I0429 20:06:09.221915   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:09.222545   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:09.222581   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:09.222449   67743 retry.go:31] will retry after 2.544889222s: waiting for machine to come up
	I0429 20:06:07.947247   66218 pod_ready.go:102] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:10.449364   66218 pod_ready.go:102] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:10.943731   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:10.943754   66218 pod_ready.go:81] duration metric: took 5.006367348s for pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:10.943763   66218 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.453825   66218 pod_ready.go:92] pod "etcd-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.453853   66218 pod_ready.go:81] duration metric: took 1.510082371s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.453865   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.462971   66218 pod_ready.go:92] pod "kube-apiserver-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.462997   66218 pod_ready.go:81] duration metric: took 9.123374ms for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.463011   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.471032   66218 pod_ready.go:92] pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.471066   66218 pod_ready.go:81] duration metric: took 8.024113ms for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.471077   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-slnph" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.478671   66218 pod_ready.go:92] pod "kube-proxy-slnph" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.478695   66218 pod_ready.go:81] duration metric: took 7.609313ms for pod "kube-proxy-slnph" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.478706   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.542851   66218 pod_ready.go:92] pod "kube-scheduler-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.542875   66218 pod_ready.go:81] duration metric: took 64.16109ms for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.542888   66218 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:10.399055   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:10.898742   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.399250   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.898511   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.399301   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.899399   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.399242   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.899417   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.398526   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.898976   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.768576   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:11.768967   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:11.769003   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:11.768924   67743 retry.go:31] will retry after 3.829285986s: waiting for machine to come up
	I0429 20:06:17.032004   65980 start.go:364] duration metric: took 56.727982697s to acquireMachinesLock for "embed-certs-161370"
	I0429 20:06:17.032074   65980 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:06:17.032085   65980 fix.go:54] fixHost starting: 
	I0429 20:06:17.032452   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:17.032485   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:17.050767   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44211
	I0429 20:06:17.051181   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:17.051655   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:06:17.051680   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:17.052002   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:17.052188   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:17.052363   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:06:17.053975   65980 fix.go:112] recreateIfNeeded on embed-certs-161370: state=Stopped err=<nil>
	I0429 20:06:17.054002   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	W0429 20:06:17.054167   65980 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:06:17.056054   65980 out.go:177] * Restarting existing kvm2 VM for "embed-certs-161370" ...
	I0429 20:06:14.550615   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:17.050288   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:17.057452   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Start
	I0429 20:06:17.057630   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring networks are active...
	I0429 20:06:17.058381   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring network default is active
	I0429 20:06:17.058680   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring network mk-embed-certs-161370 is active
	I0429 20:06:17.059024   65980 main.go:141] libmachine: (embed-certs-161370) Getting domain xml...
	I0429 20:06:17.059697   65980 main.go:141] libmachine: (embed-certs-161370) Creating domain...
	I0429 20:06:15.599423   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.599897   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has current primary IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.599915   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Found IP for machine: 192.168.61.106
	I0429 20:06:15.599929   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Reserving static IP address...
	I0429 20:06:15.600318   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Reserved static IP address: 192.168.61.106
	I0429 20:06:15.600360   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-866143", mac: "52:54:00:af:de:09", ip: "192.168.61.106"} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.600375   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for SSH to be available...
	I0429 20:06:15.600405   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | skip adding static IP to network mk-default-k8s-diff-port-866143 - found existing host DHCP lease matching {name: "default-k8s-diff-port-866143", mac: "52:54:00:af:de:09", ip: "192.168.61.106"}
	I0429 20:06:15.600423   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Getting to WaitForSSH function...
	I0429 20:06:15.602983   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.603379   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.603414   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.603581   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Using SSH client type: external
	I0429 20:06:15.603611   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa (-rw-------)
	I0429 20:06:15.603675   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:06:15.603701   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | About to run SSH command:
	I0429 20:06:15.603733   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | exit 0
	I0429 20:06:15.734933   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | SSH cmd err, output: <nil>: 
	I0429 20:06:15.735306   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetConfigRaw
	I0429 20:06:15.735918   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:15.738878   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.739349   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.739385   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.739745   66875 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/config.json ...
	I0429 20:06:15.739943   66875 machine.go:94] provisionDockerMachine start ...
	I0429 20:06:15.739966   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:15.740215   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.742731   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.743068   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.743097   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.743253   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.743448   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.743592   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.743729   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.743859   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.744066   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.744080   66875 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:06:15.855258   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:06:15.855292   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:15.855585   66875 buildroot.go:166] provisioning hostname "default-k8s-diff-port-866143"
	I0429 20:06:15.855604   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:15.855792   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.858278   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.858644   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.858672   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.858802   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.858996   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.859179   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.859327   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.859498   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.859667   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.859682   66875 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-866143 && echo "default-k8s-diff-port-866143" | sudo tee /etc/hostname
	I0429 20:06:15.986031   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-866143
	
	I0429 20:06:15.986094   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.989211   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.989633   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.989666   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.989858   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.990078   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.990281   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.990441   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.990591   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.990746   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.990763   66875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-866143' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-866143/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-866143' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:06:16.119358   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:06:16.119389   66875 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:06:16.119420   66875 buildroot.go:174] setting up certificates
	I0429 20:06:16.119431   66875 provision.go:84] configureAuth start
	I0429 20:06:16.119442   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:16.119741   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:16.122611   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.122991   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.123016   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.123180   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.125378   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.125673   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.125713   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.125805   66875 provision.go:143] copyHostCerts
	I0429 20:06:16.125883   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:06:16.125896   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:06:16.125963   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:06:16.126112   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:06:16.126125   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:06:16.126152   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:06:16.126234   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:06:16.126245   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:06:16.126270   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:06:16.126348   66875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-866143 san=[127.0.0.1 192.168.61.106 default-k8s-diff-port-866143 localhost minikube]
	I0429 20:06:16.280583   66875 provision.go:177] copyRemoteCerts
	I0429 20:06:16.280641   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:06:16.280665   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.283452   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.283760   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.283800   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.283999   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.284175   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.284335   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.284428   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:16.374564   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:06:16.408695   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0429 20:06:16.441975   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:06:16.470921   66875 provision.go:87] duration metric: took 351.479703ms to configureAuth
	I0429 20:06:16.470946   66875 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:06:16.471124   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:16.471205   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.473799   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.474105   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.474139   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.474291   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.474502   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.474692   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.474830   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.474995   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:16.475152   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:16.475167   66875 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:06:16.774044   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:06:16.774093   66875 machine.go:97] duration metric: took 1.034135495s to provisionDockerMachine
	I0429 20:06:16.774108   66875 start.go:293] postStartSetup for "default-k8s-diff-port-866143" (driver="kvm2")
	I0429 20:06:16.774123   66875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:06:16.774148   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:16.774509   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:06:16.774539   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.777163   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.777603   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.777639   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.777779   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.777949   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.778109   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.778259   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:16.866104   66875 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:06:16.870760   66875 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:06:16.870780   66875 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:06:16.870839   66875 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:06:16.870916   66875 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:06:16.871003   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:06:16.881137   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:16.911284   66875 start.go:296] duration metric: took 137.163661ms for postStartSetup
	I0429 20:06:16.911318   66875 fix.go:56] duration metric: took 20.332102679s for fixHost
	I0429 20:06:16.911337   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.914440   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.914810   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.914838   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.915087   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.915287   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.915511   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.915692   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.915886   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:16.916034   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:16.916045   66875 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 20:06:17.031867   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421177.003309274
	
	I0429 20:06:17.031892   66875 fix.go:216] guest clock: 1714421177.003309274
	I0429 20:06:17.031900   66875 fix.go:229] Guest: 2024-04-29 20:06:17.003309274 +0000 UTC Remote: 2024-04-29 20:06:16.911322778 +0000 UTC m=+211.453402116 (delta=91.986496ms)
	I0429 20:06:17.031921   66875 fix.go:200] guest clock delta is within tolerance: 91.986496ms
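	(As a quick check of the arithmetic: the guest epoch 1714421177.003309274 is 2024-04-29 20:06:17.003309274 UTC, and 1714421177.003309274 - 1714421176.911322778 = 0.091986496 s, i.e. the 91.986496ms delta reported above.)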
	I0429 20:06:17.031928   66875 start.go:83] releasing machines lock for "default-k8s-diff-port-866143", held for 20.452741912s
	I0429 20:06:17.031957   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.032261   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:17.035096   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.035467   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.035497   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.035620   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036246   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036425   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036515   66875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:06:17.036569   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:17.036698   66875 ssh_runner.go:195] Run: cat /version.json
	I0429 20:06:17.036726   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:17.039300   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039595   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039813   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.039848   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039907   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:17.039984   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.040017   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.040069   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:17.040172   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:17.040230   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:17.040329   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:17.040382   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:17.040483   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:17.040636   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:17.137510   66875 ssh_runner.go:195] Run: systemctl --version
	I0429 20:06:17.160834   66875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:06:17.320792   66875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:06:17.328367   66875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:06:17.328448   66875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:06:17.349698   66875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:06:17.349724   66875 start.go:494] detecting cgroup driver to use...
	I0429 20:06:17.349807   66875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:06:17.372156   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:06:17.388142   66875 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:06:17.388206   66875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:06:17.406108   66875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:06:17.422323   66875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:06:17.555079   66875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:06:17.727126   66875 docker.go:233] disabling docker service ...
	I0429 20:06:17.727194   66875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:06:17.743136   66875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:06:17.757045   66875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:06:17.885705   66875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:06:18.021993   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:06:18.039020   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:06:18.063267   66875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:06:18.063330   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.076473   66875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:06:18.076545   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.089566   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.102912   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.116940   66875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:06:18.130940   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.150505   66875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.177724   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.191088   66875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:06:18.203560   66875 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:06:18.203635   66875 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:06:18.221087   66875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:06:18.233719   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:18.383406   66875 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:06:18.543941   66875 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:06:18.544029   66875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:06:18.550828   66875 start.go:562] Will wait 60s for crictl version
	I0429 20:06:18.550891   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:06:18.556158   66875 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:06:18.607004   66875 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:06:18.607083   66875 ssh_runner.go:195] Run: crio --version
	I0429 20:06:18.638282   66875 ssh_runner.go:195] Run: crio --version
	I0429 20:06:18.674135   66875 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
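	For reference, the CRI-O preparation the run above performs over SSH amounts to a few edits to /etc/crio/crio.conf.d/02-crio.conf plus kernel prerequisites. A rough standalone equivalent, run directly on the guest, would be (sketch only; commands and values are the ones logged above, not an official minikube entry point):

	    # pause image and cgroup driver used by this run
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    # kernel prerequisites, then restart the runtime and confirm it answers
	    sudo modprobe br_netfilter
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload
	    sudo systemctl restart crio
	    sudo crictl version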
	I0429 20:06:15.399474   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:15.899352   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.899106   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.899205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.399351   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.899319   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.399303   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.898824   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.675590   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:18.678673   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:18.679055   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:18.679096   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:18.679272   66875 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0429 20:06:18.685110   66875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:18.705804   66875 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:06:18.705967   66875 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:06:18.706036   66875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:18.750754   66875 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:06:18.750823   66875 ssh_runner.go:195] Run: which lz4
	I0429 20:06:18.755893   66875 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 20:06:18.760892   66875 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:06:18.760921   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 20:06:19.055680   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:21.552080   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:18.301855   65980 main.go:141] libmachine: (embed-certs-161370) Waiting to get IP...
	I0429 20:06:18.302804   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.303231   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.303273   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.303198   67921 retry.go:31] will retry after 279.123731ms: waiting for machine to come up
	I0429 20:06:18.584013   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.584661   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.584703   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.584630   67921 retry.go:31] will retry after 239.910483ms: waiting for machine to come up
	I0429 20:06:18.825978   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.826393   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.826425   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.826349   67921 retry.go:31] will retry after 312.324444ms: waiting for machine to come up
	I0429 20:06:19.139999   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:19.140583   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:19.140611   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:19.140535   67921 retry.go:31] will retry after 498.525047ms: waiting for machine to come up
	I0429 20:06:19.640244   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:19.640797   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:19.640828   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:19.640756   67921 retry.go:31] will retry after 479.301061ms: waiting for machine to come up
	I0429 20:06:20.121396   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:20.121982   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:20.122015   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:20.121941   67921 retry.go:31] will retry after 706.389673ms: waiting for machine to come up
	I0429 20:06:20.829691   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:20.830191   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:20.830247   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:20.830166   67921 retry.go:31] will retry after 1.145397308s: waiting for machine to come up
	I0429 20:06:21.977290   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:21.977747   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:21.977779   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:21.977691   67921 retry.go:31] will retry after 955.977029ms: waiting for machine to come up
	I0429 20:06:20.399233   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.898571   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.398855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.898885   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.399328   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.398965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.899248   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.398833   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.899039   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.561047   66875 crio.go:462] duration metric: took 1.805186908s to copy over tarball
	I0429 20:06:20.561137   66875 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:23.264543   66875 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.703371921s)
	I0429 20:06:23.264573   66875 crio.go:469] duration metric: took 2.7034954s to extract the tarball
	I0429 20:06:23.264581   66875 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:23.303558   66875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:23.356825   66875 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 20:06:23.356854   66875 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:06:23.356873   66875 kubeadm.go:928] updating node { 192.168.61.106 8444 v1.30.0 crio true true} ...
	I0429 20:06:23.357007   66875 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-866143 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:23.357105   66875 ssh_runner.go:195] Run: crio config
	I0429 20:06:23.414195   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:06:23.414225   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:23.414237   66875 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:23.414267   66875 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.106 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-866143 NodeName:default-k8s-diff-port-866143 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:06:23.414459   66875 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.106
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-866143"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:23.414524   66875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:06:23.425977   66875 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:23.426089   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:23.437270   66875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0429 20:06:23.457613   66875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:23.479383   66875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
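	The config just staged to /var/tmp/minikube/kubeadm.yaml.new (the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents shown earlier) can be sanity-checked outside the minikube flow with a kubeadm dry run, assuming the staged v1.30.0 kubeadm binary is present alongside the kubelet (sketch only, not part of the test):

	    # render what 'kubeadm init' would do with this config, without changing the node
	    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run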
	I0429 20:06:23.509517   66875 ssh_runner.go:195] Run: grep 192.168.61.106	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:23.514202   66875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:23.528721   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:23.666941   66875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:23.687710   66875 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143 for IP: 192.168.61.106
	I0429 20:06:23.687745   66875 certs.go:194] generating shared ca certs ...
	I0429 20:06:23.687768   66875 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:23.687952   66875 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:23.688005   66875 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:23.688020   66875 certs.go:256] generating profile certs ...
	I0429 20:06:23.688168   66875 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.key
	I0429 20:06:23.688260   66875 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.key.5d7fbd4b
	I0429 20:06:23.688318   66875 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.key
	I0429 20:06:23.688481   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:23.688532   66875 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:23.688548   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:23.688592   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:23.688628   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:23.688663   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:23.688722   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:23.689611   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:23.743834   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:23.783115   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:23.819086   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:23.850794   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0429 20:06:23.882477   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:23.918607   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:23.947837   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:06:23.977241   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:24.005902   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:24.034910   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:24.064119   66875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:24.083879   66875 ssh_runner.go:195] Run: openssl version
	I0429 20:06:24.090651   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:24.104929   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.110955   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.111034   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.117914   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:24.131076   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:24.144790   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.150842   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.150926   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.157842   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:24.171737   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:24.186164   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.191924   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.191995   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.199385   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:24.213392   66875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:24.219369   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:24.226784   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:24.234655   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:24.242406   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:24.249904   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:24.257400   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 20:06:24.264165   66875 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:24.264290   66875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:24.264353   66875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:24.310126   66875 cri.go:89] found id: ""
	I0429 20:06:24.310197   66875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:24.322134   66875 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:24.322155   66875 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:24.322160   66875 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:24.322223   66875 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:24.337713   66875 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:24.339184   66875 kubeconfig.go:125] found "default-k8s-diff-port-866143" server: "https://192.168.61.106:8444"
	I0429 20:06:24.342237   66875 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:24.353500   66875 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.106
	I0429 20:06:24.353545   66875 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:24.353560   66875 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:24.353627   66875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:24.399835   66875 cri.go:89] found id: ""
	I0429 20:06:24.399918   66875 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:24.426456   66875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:24.440261   66875 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:24.440282   66875 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:24.440376   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0429 20:06:24.450699   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:24.450766   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:24.462870   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0429 20:06:24.474894   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:24.474961   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:24.488607   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0429 20:06:24.499626   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:24.499685   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:24.514156   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0429 20:06:24.525958   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:24.526018   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:24.537063   66875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:24.548503   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:24.687916   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:24.051367   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:26.550970   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:22.935362   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:22.935797   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:22.935827   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:22.935746   67921 retry.go:31] will retry after 1.25494649s: waiting for machine to come up
	I0429 20:06:24.192017   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:24.192613   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:24.192641   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:24.192556   67921 retry.go:31] will retry after 1.641885834s: waiting for machine to come up
	I0429 20:06:25.836686   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:25.837170   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:25.837193   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:25.837125   67921 retry.go:31] will retry after 2.794216099s: waiting for machine to come up
	I0429 20:06:25.398515   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:25.898944   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.399360   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.899294   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.399520   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.899434   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.398734   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.898479   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.399413   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.899236   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.234143   66875 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.546180467s)
	I0429 20:06:26.234181   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.502030   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.577778   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.689836   66875 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:26.689982   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.190231   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.690207   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.729434   66875 api_server.go:72] duration metric: took 1.039599386s to wait for apiserver process to appear ...
	I0429 20:06:27.729473   66875 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:06:27.729497   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:27.730016   66875 api_server.go:269] stopped: https://192.168.61.106:8444/healthz: Get "https://192.168.61.106:8444/healthz": dial tcp 192.168.61.106:8444: connect: connection refused
	I0429 20:06:28.230353   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:28.551049   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:31.051387   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:31.411151   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:31.411188   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:31.411205   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:31.424074   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:31.424106   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:31.729916   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:31.737269   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:31.737299   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:32.229834   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:32.237900   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:32.237935   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:32.730529   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:32.735043   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 200:
	ok
	I0429 20:06:32.743999   66875 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:32.744026   66875 api_server.go:131] duration metric: took 5.014546615s to wait for apiserver health ...
	I0429 20:06:32.744035   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:06:32.744041   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:32.745889   66875 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:28.633451   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:28.633950   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:28.633979   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:28.633906   67921 retry.go:31] will retry after 2.251092878s: waiting for machine to come up
	I0429 20:06:30.887722   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:30.888251   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:30.888283   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:30.888208   67921 retry.go:31] will retry after 2.941721217s: waiting for machine to come up
	I0429 20:06:32.747198   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:32.760578   66875 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:32.786719   66875 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:32.797795   66875 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:32.797830   66875 system_pods.go:61] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:32.797838   66875 system_pods.go:61] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:32.797844   66875 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:32.797854   66875 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:32.797859   66875 system_pods.go:61] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0429 20:06:32.797866   66875 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:32.797873   66875 system_pods.go:61] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:32.797880   66875 system_pods.go:61] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0429 20:06:32.797888   66875 system_pods.go:74] duration metric: took 11.14839ms to wait for pod list to return data ...
	I0429 20:06:32.797902   66875 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:32.801888   66875 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:32.801909   66875 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:32.801918   66875 node_conditions.go:105] duration metric: took 4.010782ms to run NodePressure ...
	I0429 20:06:32.801934   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:33.088679   66875 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:33.094165   66875 kubeadm.go:733] kubelet initialised
	I0429 20:06:33.094185   66875 kubeadm.go:734] duration metric: took 5.479589ms waiting for restarted kubelet to initialise ...
	I0429 20:06:33.094192   66875 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:33.101524   66875 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.106879   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.106911   66875 pod_ready.go:81] duration metric: took 5.352162ms for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.106923   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.106946   66875 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.111446   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.111469   66875 pod_ready.go:81] duration metric: took 4.507858ms for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.111478   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.111483   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.115613   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.115643   66875 pod_ready.go:81] duration metric: took 4.152743ms for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.115654   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.115663   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.191660   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.191695   66875 pod_ready.go:81] duration metric: took 76.012388ms for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.191707   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.191713   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.592489   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-proxy-zddtx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.592522   66875 pod_ready.go:81] duration metric: took 400.801861ms for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.592535   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-proxy-zddtx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.592544   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.990624   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.990655   66875 pod_ready.go:81] duration metric: took 398.101779ms for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.990667   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.990673   66875 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:34.391120   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:34.391148   66875 pod_ready.go:81] duration metric: took 400.467456ms for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:34.391165   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:34.391173   66875 pod_ready.go:38] duration metric: took 1.296972775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:34.391191   66875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:06:34.408817   66875 ops.go:34] apiserver oom_adj: -16
	I0429 20:06:34.408845   66875 kubeadm.go:591] duration metric: took 10.086677852s to restartPrimaryControlPlane
	I0429 20:06:34.408856   66875 kubeadm.go:393] duration metric: took 10.144698168s to StartCluster
	I0429 20:06:34.408876   66875 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:34.408961   66875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:06:34.411093   66875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:34.411379   66875 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:06:34.413055   66875 out.go:177] * Verifying Kubernetes components...
	I0429 20:06:34.411518   66875 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:06:34.411607   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:34.414229   66875 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414239   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:34.414261   66875 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-866143"
	I0429 20:06:34.414238   66875 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414232   66875 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414341   66875 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.414355   66875 addons.go:243] addon metrics-server should already be in state true
	I0429 20:06:34.414382   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.414381   66875 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.414396   66875 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:06:34.414439   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.414650   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414677   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.414746   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414758   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.414890   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414923   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.433279   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I0429 20:06:34.433827   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.434444   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.434474   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.434873   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.435436   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.435483   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.435739   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46105
	I0429 20:06:34.435746   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0429 20:06:34.436117   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.436245   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.436638   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.436678   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.436734   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.436747   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.437011   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.437057   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.437218   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.437601   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.437630   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.441092   66875 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.441118   66875 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:06:34.441146   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.441550   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.441582   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.451571   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0429 20:06:34.452041   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.452627   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.452650   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.453080   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.453401   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.455145   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0429 20:06:34.455335   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.457339   66875 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:34.455992   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.456826   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I0429 20:06:34.458912   66875 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:06:34.458925   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:06:34.458942   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.459155   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.459818   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.459836   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.460050   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.460068   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.460196   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.460406   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.460450   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.461005   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.461051   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.462529   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.462624   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.464140   66875 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:06:30.398730   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:30.898542   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.399309   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.898751   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.399374   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.899262   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.398723   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.899281   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.399356   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.899305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.463014   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.463255   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.465585   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.465598   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:06:34.465623   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:06:34.465652   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.465703   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.465892   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.466043   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.468951   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.469342   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.469407   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.469645   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.469817   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.469984   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.470137   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.484411   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0429 20:06:34.484864   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.485366   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.485396   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.485759   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.485937   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.487715   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.487962   66875 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:06:34.487975   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:06:34.487989   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.490407   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.490724   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.490748   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.490890   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.491045   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.491146   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.491274   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.618088   66875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:34.638582   66875 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-866143" to be "Ready" ...
	I0429 20:06:34.729046   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:06:34.729633   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:06:34.729649   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:06:34.752200   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:06:34.770107   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:06:34.770143   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:06:34.847081   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:06:34.847117   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:06:34.889992   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:06:35.821090   66875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091986938s)
	I0429 20:06:35.821127   66875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.068905753s)
	I0429 20:06:35.821145   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821150   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821157   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821162   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821490   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821505   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821514   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821524   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821528   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821534   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821549   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821540   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821902   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821923   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821936   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.822007   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.822024   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.828303   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.828348   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.828591   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.828606   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.828632   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.843540   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.843566   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.843860   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.843877   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.843886   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.843894   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.844127   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.844170   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.844188   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.844203   66875 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-866143"
	I0429 20:06:35.846214   66875 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0429 20:06:33.549917   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:35.550564   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:33.831181   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:33.831552   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:33.831581   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:33.831506   67921 retry.go:31] will retry after 5.040485428s: waiting for machine to come up
	I0429 20:06:35.399419   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:35.899244   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.398934   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.898847   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.399273   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.899102   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.398748   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.399524   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.898813   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:35.847674   66875 addons.go:505] duration metric: took 1.436173952s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0429 20:06:36.641963   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:38.642738   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:38.873188   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.873625   65980 main.go:141] libmachine: (embed-certs-161370) Found IP for machine: 192.168.50.184
	I0429 20:06:38.873653   65980 main.go:141] libmachine: (embed-certs-161370) Reserving static IP address...
	I0429 20:06:38.873669   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has current primary IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.874037   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "embed-certs-161370", mac: "52:54:00:e6:05:1f", ip: "192.168.50.184"} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:38.874091   65980 main.go:141] libmachine: (embed-certs-161370) Reserved static IP address: 192.168.50.184
	I0429 20:06:38.874113   65980 main.go:141] libmachine: (embed-certs-161370) DBG | skip adding static IP to network mk-embed-certs-161370 - found existing host DHCP lease matching {name: "embed-certs-161370", mac: "52:54:00:e6:05:1f", ip: "192.168.50.184"}
	I0429 20:06:38.874132   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Getting to WaitForSSH function...
	I0429 20:06:38.874151   65980 main.go:141] libmachine: (embed-certs-161370) Waiting for SSH to be available...
	I0429 20:06:38.875891   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.876205   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:38.876237   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.876401   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Using SSH client type: external
	I0429 20:06:38.876425   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa (-rw-------)
	I0429 20:06:38.876455   65980 main.go:141] libmachine: (embed-certs-161370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:06:38.876475   65980 main.go:141] libmachine: (embed-certs-161370) DBG | About to run SSH command:
	I0429 20:06:38.876486   65980 main.go:141] libmachine: (embed-certs-161370) DBG | exit 0
	I0429 20:06:39.006684   65980 main.go:141] libmachine: (embed-certs-161370) DBG | SSH cmd err, output: <nil>: 
	I0429 20:06:39.007072   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetConfigRaw
	I0429 20:06:39.007701   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:39.010189   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.010539   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.010577   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.010783   65980 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/config.json ...
	I0429 20:06:39.010970   65980 machine.go:94] provisionDockerMachine start ...
	I0429 20:06:39.010986   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:39.011196   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.013422   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.013832   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.013862   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.013986   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.014183   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.014377   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.014528   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.014710   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.014868   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.014878   65980 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:06:39.119151   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:06:39.119183   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.119425   65980 buildroot.go:166] provisioning hostname "embed-certs-161370"
	I0429 20:06:39.119449   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.119606   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.122418   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.122725   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.122755   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.122894   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.123087   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.123235   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.123371   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.123547   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.123719   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.123734   65980 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-161370 && echo "embed-certs-161370" | sudo tee /etc/hostname
	I0429 20:06:39.247323   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-161370
	
	I0429 20:06:39.247360   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.250202   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.250594   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.250623   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.250761   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.250956   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.251158   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.251354   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.251536   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.251724   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.251746   65980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-161370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-161370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-161370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:06:39.370366   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:06:39.370395   65980 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:06:39.370415   65980 buildroot.go:174] setting up certificates
	I0429 20:06:39.370429   65980 provision.go:84] configureAuth start
	I0429 20:06:39.370441   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.370754   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:39.373600   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.373977   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.374011   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.374305   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.376654   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.376999   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.377032   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.377156   65980 provision.go:143] copyHostCerts
	I0429 20:06:39.377217   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:06:39.377228   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:06:39.377279   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:06:39.377367   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:06:39.377375   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:06:39.377393   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:06:39.377446   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:06:39.377453   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:06:39.377470   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:06:39.377523   65980 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.embed-certs-161370 san=[127.0.0.1 192.168.50.184 embed-certs-161370 localhost minikube]
	I0429 20:06:39.441865   65980 provision.go:177] copyRemoteCerts
	I0429 20:06:39.441931   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:06:39.441954   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.445189   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.445633   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.445677   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.445918   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.446166   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.446364   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.446521   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:39.535703   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:06:39.571033   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0429 20:06:39.604181   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:06:39.639250   65980 provision.go:87] duration metric: took 268.808275ms to configureAuth
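
Note on the configureAuth step above: provision.go:117 generates a server certificate whose SANs are [127.0.0.1 192.168.50.184 embed-certs-161370 localhost minikube], signed against the profile CA, and the resulting server.pem/server-key.pem are then copied to /etc/docker on the guest. The Go sketch below only illustrates producing a certificate with those SANs; it is self-signed for brevity and is not minikube's actual code path, and the subject and validity period are assumptions.

    // Hedged sketch: a TLS server certificate carrying the SANs reported in the log above.
    // Self-signed here for brevity; minikube signs with the profile CA instead.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-161370"}}, // org string taken from the log
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0), // assumption: 26280h (3 years) as in the profile's CertExpiration
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"embed-certs-161370", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.184")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
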
	I0429 20:06:39.639339   65980 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:06:39.639575   65980 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:39.639668   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.642544   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.642975   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.643006   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.643146   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.643348   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.643507   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.643671   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.643838   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.644011   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.644039   65980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:06:39.974134   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:06:39.974168   65980 machine.go:97] duration metric: took 963.184467ms to provisionDockerMachine
	I0429 20:06:39.974186   65980 start.go:293] postStartSetup for "embed-certs-161370" (driver="kvm2")
	I0429 20:06:39.974201   65980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:06:39.974229   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:39.974601   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:06:39.974636   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.977843   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.978295   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.978328   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.978528   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.978768   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.978939   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.979144   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.066379   65980 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:06:40.071720   65980 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:06:40.071742   65980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:06:40.071798   65980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:06:40.071875   65980 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:06:40.071965   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:06:40.082556   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:40.112774   65980 start.go:296] duration metric: took 138.571139ms for postStartSetup
	I0429 20:06:40.112827   65980 fix.go:56] duration metric: took 23.080734046s for fixHost
	I0429 20:06:40.112859   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.115931   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.116414   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.116448   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.116643   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.116859   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.117026   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.117169   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.117358   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:40.117560   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:40.117576   65980 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 20:06:40.223697   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421200.206855033
	
	I0429 20:06:40.223722   65980 fix.go:216] guest clock: 1714421200.206855033
	I0429 20:06:40.223732   65980 fix.go:229] Guest: 2024-04-29 20:06:40.206855033 +0000 UTC Remote: 2024-04-29 20:06:40.112832003 +0000 UTC m=+362.399028562 (delta=94.02303ms)
	I0429 20:06:40.223777   65980 fix.go:200] guest clock delta is within tolerance: 94.02303ms
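
Note on the fix.go lines above: the guest clock is read over SSH with `date +%s.%N`, compared with the host clock, and provisioning continues because the ~94ms delta is inside the allowed skew. A minimal Go sketch of that comparison follows, using the exact timestamps from this log; the 2-second threshold is an assumption for illustration, not the value minikube actually uses.

    // Hedged sketch of the guest/host clock-skew check logged above.
    // The tolerance value is an assumption; the log does not state the real threshold.
    package main

    import (
    	"fmt"
    	"time"
    )

    func clockDelta(guest, host time.Time) time.Duration {
    	d := guest.Sub(host)
    	if d < 0 {
    		d = -d
    	}
    	return d
    }

    func main() {
    	guest := time.Unix(1714421200, 206855033).UTC()                // "date +%s.%N" result from the guest
    	host := time.Date(2024, 4, 29, 20, 6, 40, 112832003, time.UTC) // host-side timestamp from the log
    	delta := clockDelta(guest, host)
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, delta <= 2*time.Second) // delta ≈ 94.02303ms
    }
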
	I0429 20:06:40.223782   65980 start.go:83] releasing machines lock for "embed-certs-161370", held for 23.191744513s
	I0429 20:06:40.223804   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.224106   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:40.226904   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.227299   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.227328   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.227462   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.227955   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.228117   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.228199   65980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:06:40.228238   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.228353   65980 ssh_runner.go:195] Run: cat /version.json
	I0429 20:06:40.228378   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.230943   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231151   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231370   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.231401   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231585   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.231595   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.231629   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231794   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.231806   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.231982   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.232000   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.232182   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.232197   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.232303   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.337533   65980 ssh_runner.go:195] Run: systemctl --version
	I0429 20:06:40.347252   65980 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:06:40.494668   65980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:06:40.502707   65980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:06:40.502788   65980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:06:40.522261   65980 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:06:40.522298   65980 start.go:494] detecting cgroup driver to use...
	I0429 20:06:40.522368   65980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:06:40.540576   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:06:40.557130   65980 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:06:40.557203   65980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:06:40.573803   65980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:06:40.589730   65980 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:06:40.731625   65980 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:06:40.902594   65980 docker.go:233] disabling docker service ...
	I0429 20:06:40.902665   65980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:06:40.921454   65980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:06:40.938734   65980 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:06:41.081822   65980 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:06:41.237778   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:06:41.254086   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:06:41.276277   65980 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:06:41.276362   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.288903   65980 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:06:41.288972   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.301347   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.313639   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.325885   65980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:06:41.338215   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.350839   65980 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.372124   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
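
Taken together, the sed/grep edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. This is a reconstruction from the commands in this log only: the shipped drop-in also carries whatever other settings the ISO provides, and these keys normally sit under CRI-O's standard [crio.image] and [crio.runtime] tables, which the commands here do not show.

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
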
	I0429 20:06:41.385505   65980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:06:41.397626   65980 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:06:41.397704   65980 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:06:41.413915   65980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:06:41.427068   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:41.575690   65980 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:06:41.748047   65980 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:06:41.748132   65980 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:06:41.753313   65980 start.go:562] Will wait 60s for crictl version
	I0429 20:06:41.753379   65980 ssh_runner.go:195] Run: which crictl
	I0429 20:06:41.757672   65980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:06:41.794045   65980 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:06:41.794150   65980 ssh_runner.go:195] Run: crio --version
	I0429 20:06:41.831177   65980 ssh_runner.go:195] Run: crio --version
	I0429 20:06:41.865125   65980 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:06:38.049006   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:40.050003   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:42.050213   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:41.866698   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:41.869477   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:41.869815   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:41.869848   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:41.870107   65980 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0429 20:06:41.874917   65980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:41.889196   65980 kubeadm.go:877] updating cluster {Name:embed-certs-161370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:06:41.889353   65980 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:06:41.889423   65980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:41.936285   65980 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:06:41.936352   65980 ssh_runner.go:195] Run: which lz4
	I0429 20:06:41.941893   65980 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 20:06:41.947071   65980 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:06:41.947112   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 20:06:40.399024   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:40.899056   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.399275   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.899285   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.399200   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.899243   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.398590   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.899346   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.143962   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:41.645981   66875 node_ready.go:49] node "default-k8s-diff-port-866143" has status "Ready":"True"
	I0429 20:06:41.646007   66875 node_ready.go:38] duration metric: took 7.007388661s for node "default-k8s-diff-port-866143" to be "Ready" ...
	I0429 20:06:41.646018   66875 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:41.652664   66875 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.657667   66875 pod_ready.go:92] pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.657685   66875 pod_ready.go:81] duration metric: took 4.993051ms for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.657694   66875 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.662632   66875 pod_ready.go:92] pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.662650   66875 pod_ready.go:81] duration metric: took 4.950519ms for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.662658   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.667488   66875 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.667509   66875 pod_ready.go:81] duration metric: took 4.844299ms for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.667520   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.672480   66875 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.672501   66875 pod_ready.go:81] duration metric: took 4.974639ms for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.672512   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:42.042828   66875 pod_ready.go:92] pod "kube-proxy-zddtx" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:42.042856   66875 pod_ready.go:81] duration metric: took 370.336555ms for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:42.042868   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.051930   66875 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:44.548970   66875 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:44.548999   66875 pod_ready.go:81] duration metric: took 2.506120519s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.549011   66875 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
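
Note on the pod_ready.go lines above and below: the test polls each system-critical pod until its Ready condition reports True (or the 6m0s wait expires). The client-go sketch below shows the same readiness check; it is an illustrative stand-in, not minikube's implementation, and the kubeconfig path (the guest-side path quoted in this log) and the pod name are used only as example inputs.

    // Hedged sketch: read one pod and report whether its Ready condition is True,
    // the same signal the pod_ready.go lines in this log poll for.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Example inputs only: kubeconfig path and pod name are taken from this log.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-g6gw2", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Ready:", isPodReady(pod))
    }
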
	I0429 20:06:44.051077   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:46.052233   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:43.759688   65980 crio.go:462] duration metric: took 1.817838869s to copy over tarball
	I0429 20:06:43.759784   65980 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:46.405802   65980 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.64598022s)
	I0429 20:06:46.405851   65980 crio.go:469] duration metric: took 2.646122331s to extract the tarball
	I0429 20:06:46.405861   65980 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:46.444700   65980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:46.503047   65980 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 20:06:46.503086   65980 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:06:46.503098   65980 kubeadm.go:928] updating node { 192.168.50.184 8443 v1.30.0 crio true true} ...
	I0429 20:06:46.503234   65980 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-161370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:46.503334   65980 ssh_runner.go:195] Run: crio config
	I0429 20:06:46.563489   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:06:46.563511   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:46.563523   65980 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:46.563542   65980 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-161370 NodeName:embed-certs-161370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:06:46.563662   65980 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-161370"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:46.563719   65980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:06:46.576288   65980 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:46.576350   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:46.586807   65980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0429 20:06:46.605883   65980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:46.626741   65980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0429 20:06:46.647223   65980 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:46.652262   65980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
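	(The /etc/hosts one-liner above strips any stale control-plane.minikube.internal entry and appends the current mapping. A minimal local Go sketch of the same idea, illustrative only, since minikube performs it over SSH with the bash command shown:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops lines mentioning host and appends "ip<TAB>host", like the grep -v / echo pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.Contains(line, host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.50.184", "control-plane.minikube.internal"))
}
)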
	I0429 20:06:46.667095   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:46.804937   65980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:46.831022   65980 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370 for IP: 192.168.50.184
	I0429 20:06:46.831048   65980 certs.go:194] generating shared ca certs ...
	I0429 20:06:46.831067   65980 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:46.831251   65980 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:46.831295   65980 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:46.831306   65980 certs.go:256] generating profile certs ...
	I0429 20:06:46.831385   65980 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/client.key
	I0429 20:06:46.831440   65980 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.key.9384fac7
	I0429 20:06:46.831476   65980 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.key
	I0429 20:06:46.831582   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:46.831610   65980 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:46.831617   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:46.831635   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:46.831662   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:46.831691   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:46.831729   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:46.832571   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:46.896363   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:46.939336   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:46.976256   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:47.007777   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0429 20:06:47.045019   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:47.079584   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:47.114002   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:06:47.142163   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:47.170063   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:47.199098   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:47.228985   65980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:47.250928   65980 ssh_runner.go:195] Run: openssl version
	I0429 20:06:47.258215   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:47.271653   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.277100   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.277183   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.283876   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:47.297519   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:47.311104   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.316347   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.316408   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.322992   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:47.337744   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:47.351332   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.356912   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.356964   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.363339   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:47.378501   65980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:47.383995   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:47.391157   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:47.398039   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:47.405117   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:47.412125   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:47.419257   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
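	(Each of the openssl runs above uses -checkend 86400, which exits 0 when the certificate is still valid 24 hours from now and non-zero when it will expire within that window. A small sketch of the same check from Go; the helper name is made up and the path is one of the certs from the log:

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor24h shells out to `openssl x509 -noout -in <path> -checkend 86400`.
func certValidFor24h(path string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		// A plain non-zero exit means the cert expires within 24h.
		if _, ok := err.(*exec.ExitError); ok {
			return false, nil
		}
		return false, err // openssl missing, unreadable file, etc.
	}
	return true, nil
}

func main() {
	ok, err := certValidFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(ok, err)
}
)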
	I0429 20:06:47.425917   65980 kubeadm.go:391] StartCluster: {Name:embed-certs-161370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:47.426009   65980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:47.426049   65980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:47.469133   65980 cri.go:89] found id: ""
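	(The empty `found id: ""` result comes from listing CRI containers filtered by the kube-system pod-namespace label. A hedged sketch of the same crictl invocation driven from Go; the real logic lives in cri.go/ssh_runner.go and runs over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs mirrors: sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; an empty output matches the log above.
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	fmt.Println(len(ids), "kube-system containers", err)
}
)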
	I0429 20:06:47.469216   65980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:47.481852   65980 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:47.481878   65980 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:47.481883   65980 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:47.481926   65980 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:47.495254   65980 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:47.496760   65980 kubeconfig.go:125] found "embed-certs-161370" server: "https://192.168.50.184:8443"
	I0429 20:06:47.499898   65980 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:47.511866   65980 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.184
	I0429 20:06:47.511903   65980 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:47.511917   65980 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:47.511972   65980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:47.563879   65980 cri.go:89] found id: ""
	I0429 20:06:47.563956   65980 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:47.583490   65980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:47.595867   65980 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:47.595893   65980 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:47.595947   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:06:47.608218   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:47.608283   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:47.620329   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:06:47.631394   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:47.631527   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:47.643107   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:06:47.654164   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:47.654233   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:47.665890   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:06:47.676817   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:47.676859   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:47.688608   65980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:47.700068   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:45.398908   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:45.898619   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.398795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.899058   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.399257   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.899269   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.398874   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.898653   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.399305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.898855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.556692   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:49.056546   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:48.550949   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:50.551905   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:47.821391   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.623284   65980 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.31791052s)
	I0429 20:06:49.623343   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.870630   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.950525   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:50.061240   65980 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:50.061331   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.562165   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.062299   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.139853   65980 api_server.go:72] duration metric: took 1.078602354s to wait for apiserver process to appear ...
	I0429 20:06:51.139883   65980 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:06:51.139905   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:51.140472   65980 api_server.go:269] stopped: https://192.168.50.184:8443/healthz: Get "https://192.168.50.184:8443/healthz": dial tcp 192.168.50.184:8443: connect: connection refused
	I0429 20:06:51.640813   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:50.398577   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.899284   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.399361   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.899134   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.399211   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.898733   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.399280   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.898915   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.399264   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.898840   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.057650   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:53.559429   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:53.049570   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:55.049866   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:57.050558   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:54.540707   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:54.540765   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:54.540797   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:54.618982   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:54.619016   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:54.640333   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:54.674491   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:54.674542   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:55.140955   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:55.157479   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:55.157517   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:55.639999   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:55.646278   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:55.646311   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:56.140938   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:56.147336   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:56.147371   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:56.640927   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:56.647027   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:56.647054   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:57.140894   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:57.145193   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:57.145236   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:57.640842   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:57.645453   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:57.645478   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:58.140524   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:58.146317   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0429 20:06:58.153972   65980 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:58.154011   65980 api_server.go:131] duration metric: took 7.014120443s to wait for apiserver health ...
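	(The block above shows the healthz wait: repeated GETs to https://192.168.50.184:8443/healthz first fail with connection refused, then return 403 for the anonymous user, then 500 while post-start hooks finish, and finally 200. A minimal polling sketch of that loop follows; as an assumption it skips TLS verification for brevity, whereas the real check authenticates with the cluster's client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url every 500ms until it returns 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to "/healthz returned 200: ok" above
			}
			// 403/500 responses like those above mean the apiserver is up but not ready yet.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.184:8443/healthz", 4*time.Minute))
}
)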
	I0429 20:06:58.154028   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:06:58.154036   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:58.155341   65980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:55.398622   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:55.898563   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.399306   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.898473   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.899278   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.399121   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.899291   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.399197   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.898901   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.056503   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:58.056988   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:59.053737   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:01.555480   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:58.156794   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:58.176870   65980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
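	(The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI selected above. The sketch below builds a generic bridge conflist of that general shape; the exact fields minikube writes may differ, so treat it purely as an illustration:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A generic bridge + portmap plugin chain using host-local IPAM on the pod subnet from the log.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
)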
	I0429 20:06:58.215333   65980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:58.230619   65980 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:58.230655   65980 system_pods.go:61] "coredns-7db6d8ff4d-wjfff" [bd92e456-b538-49ae-984b-c6bcea6add30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:58.230667   65980 system_pods.go:61] "etcd-embed-certs-161370" [da2d022f-33c4-49b7-b997-a6783043f3e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:58.230675   65980 system_pods.go:61] "kube-apiserver-embed-certs-161370" [032913c9-bb91-46ba-ad4d-a4d5b63d806f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:58.230681   65980 system_pods.go:61] "kube-controller-manager-embed-certs-161370" [2f3ae1ff-0688-4c70-a888-5e1e640f64bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:58.230685   65980 system_pods.go:61] "kube-proxy-9kmg8" [01bbd2ca-24d2-4c7c-b4ea-79604ac3f486] Running
	I0429 20:06:58.230689   65980 system_pods.go:61] "kube-scheduler-embed-certs-161370" [c88ab7cc-1aef-48cb-814e-eff8e946885c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:58.230694   65980 system_pods.go:61] "metrics-server-569cc877fc-c4h7f" [bf1cae8d-cca1-4573-935f-e60118ca9575] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:58.230698   65980 system_pods.go:61] "storage-provisioner" [1686a084-f28b-4b26-b748-85a2a3613dde] Running
	I0429 20:06:58.230703   65980 system_pods.go:74] duration metric: took 15.348727ms to wait for pod list to return data ...
	I0429 20:06:58.230713   65980 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:58.233411   65980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:58.233436   65980 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:58.233447   65980 node_conditions.go:105] duration metric: took 2.729694ms to run NodePressure ...
	I0429 20:06:58.233466   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:58.532729   65980 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:58.538018   65980 kubeadm.go:733] kubelet initialised
	I0429 20:06:58.538038   65980 kubeadm.go:734] duration metric: took 5.283028ms waiting for restarted kubelet to initialise ...
	I0429 20:06:58.538046   65980 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:58.544267   65980 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:00.553501   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:00.398537   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.899359   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.399125   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.899428   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.399457   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.899355   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.399421   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.899376   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.399331   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.899263   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.555517   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:02.557429   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:05.056216   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:04.049941   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:06.051285   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:03.069330   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:03.554710   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.554732   65980 pod_ready.go:81] duration metric: took 5.010440873s for pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.554742   65980 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.562277   65980 pod_ready.go:92] pod "etcd-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.562298   65980 pod_ready.go:81] duration metric: took 7.550156ms for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.562306   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.567038   65980 pod_ready.go:92] pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.567060   65980 pod_ready.go:81] duration metric: took 4.748002ms for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.567069   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.573632   65980 pod_ready.go:92] pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.573664   65980 pod_ready.go:81] duration metric: took 1.006574407s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.573675   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9kmg8" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.578356   65980 pod_ready.go:92] pod "kube-proxy-9kmg8" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.578377   65980 pod_ready.go:81] duration metric: took 4.694437ms for pod "kube-proxy-9kmg8" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.578388   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.749703   65980 pod_ready.go:92] pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.749733   65980 pod_ready.go:81] duration metric: took 171.336391ms for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.749750   65980 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:06.757041   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:05.398458   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:05.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.399205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.399308   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.898749   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:08.399182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:08.399271   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:08.448015   66615 cri.go:89] found id: ""
	I0429 20:07:08.448041   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.448049   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:08.448055   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:08.448103   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:08.491239   66615 cri.go:89] found id: ""
	I0429 20:07:08.491265   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.491274   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:08.491280   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:08.491330   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:08.541203   66615 cri.go:89] found id: ""
	I0429 20:07:08.541226   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.541234   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:08.541239   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:08.541300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:08.584370   66615 cri.go:89] found id: ""
	I0429 20:07:08.584393   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.584401   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:08.584407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:08.584469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:08.625126   66615 cri.go:89] found id: ""
	I0429 20:07:08.625158   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.625169   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:08.625182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:08.625246   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:08.666987   66615 cri.go:89] found id: ""
	I0429 20:07:08.667018   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.667032   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:08.667039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:08.667105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:08.712363   66615 cri.go:89] found id: ""
	I0429 20:07:08.712394   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.712405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:08.712413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:08.712471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:08.762122   66615 cri.go:89] found id: ""
	I0429 20:07:08.762151   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.762170   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:08.762180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:08.762196   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:08.808218   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:08.808246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:08.867278   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:08.867317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:08.884230   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:08.884266   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:09.018183   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:09.018208   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:09.018224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:07.555443   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:09.557653   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:08.551823   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.051232   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:09.257687   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.758829   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.587112   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:11.603711   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:11.603781   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:11.651087   66615 cri.go:89] found id: ""
	I0429 20:07:11.651115   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.651123   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:11.651128   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:11.651192   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:11.691888   66615 cri.go:89] found id: ""
	I0429 20:07:11.691914   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.691921   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:11.691928   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:11.691976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:11.733411   66615 cri.go:89] found id: ""
	I0429 20:07:11.733441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.733452   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:11.733460   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:11.733517   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:11.774620   66615 cri.go:89] found id: ""
	I0429 20:07:11.774648   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.774659   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:11.774666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:11.774729   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:11.821410   66615 cri.go:89] found id: ""
	I0429 20:07:11.821441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.821449   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:11.821455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:11.821502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:11.864699   66615 cri.go:89] found id: ""
	I0429 20:07:11.864730   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.864741   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:11.864749   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:11.864809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:11.904637   66615 cri.go:89] found id: ""
	I0429 20:07:11.904678   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.904687   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:11.904693   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:11.904742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:11.970914   66615 cri.go:89] found id: ""
	I0429 20:07:11.970945   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.970957   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:11.970968   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:11.970984   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:12.024185   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:12.024226   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:12.040319   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:12.040349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:12.137888   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:12.137915   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:12.137941   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:12.210256   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:12.210290   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:14.758756   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:14.775321   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:14.775386   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:14.812637   66615 cri.go:89] found id: ""
	I0429 20:07:14.812662   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.812672   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:14.812679   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:14.812735   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:14.851503   66615 cri.go:89] found id: ""
	I0429 20:07:14.851536   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.851547   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:14.851554   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:14.851613   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:14.885708   66615 cri.go:89] found id: ""
	I0429 20:07:14.885739   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.885749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:14.885756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:14.885817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:14.926133   66615 cri.go:89] found id: ""
	I0429 20:07:14.926162   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.926173   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:14.926181   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:14.926240   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:12.056093   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.056500   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:13.549924   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:15.550544   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.257394   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:16.756833   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.967553   66615 cri.go:89] found id: ""
	I0429 20:07:14.967582   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.967593   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:14.967601   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:14.967659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:15.006174   66615 cri.go:89] found id: ""
	I0429 20:07:15.006199   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.006207   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:15.006218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:15.006293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:15.046916   66615 cri.go:89] found id: ""
	I0429 20:07:15.046940   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.046947   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:15.046953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:15.047009   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:15.089229   66615 cri.go:89] found id: ""
	I0429 20:07:15.089256   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.089266   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:15.089278   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:15.089298   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:15.143518   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:15.143561   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:15.162742   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:15.162769   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:15.242850   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:15.242872   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:15.242884   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:15.315783   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:15.315825   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:17.863336   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:17.877802   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:17.877869   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:17.935714   66615 cri.go:89] found id: ""
	I0429 20:07:17.935738   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.935746   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:17.935754   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:17.935810   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:17.988496   66615 cri.go:89] found id: ""
	I0429 20:07:17.988529   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.988540   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:17.988547   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:17.988610   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:18.030695   66615 cri.go:89] found id: ""
	I0429 20:07:18.030726   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.030737   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:18.030745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:18.030822   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:18.077452   66615 cri.go:89] found id: ""
	I0429 20:07:18.077481   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.077491   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:18.077498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:18.077561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:18.120102   66615 cri.go:89] found id: ""
	I0429 20:07:18.120127   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.120136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:18.120141   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:18.120200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:18.163440   66615 cri.go:89] found id: ""
	I0429 20:07:18.163469   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.163480   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:18.163487   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:18.163549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:18.202650   66615 cri.go:89] found id: ""
	I0429 20:07:18.202680   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.202693   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:18.202699   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:18.202760   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:18.244378   66615 cri.go:89] found id: ""
	I0429 20:07:18.244408   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.244418   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:18.244429   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:18.244446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:18.289246   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:18.289279   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:18.343382   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:18.343425   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:18.359070   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:18.359103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:18.440316   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:18.440337   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:18.440351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:16.555711   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.555851   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.051297   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:20.551594   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.756941   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:20.756974   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:22.757155   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:21.019552   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:21.036407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:21.036523   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:21.083148   66615 cri.go:89] found id: ""
	I0429 20:07:21.083171   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.083179   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:21.083184   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:21.083231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:21.129382   66615 cri.go:89] found id: ""
	I0429 20:07:21.129415   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.129426   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:21.129434   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:21.129502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:21.172978   66615 cri.go:89] found id: ""
	I0429 20:07:21.173007   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.173015   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:21.173020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:21.173068   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:21.218124   66615 cri.go:89] found id: ""
	I0429 20:07:21.218159   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.218171   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:21.218178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:21.218243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:21.260603   66615 cri.go:89] found id: ""
	I0429 20:07:21.260640   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.260651   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:21.260658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:21.260723   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:21.302351   66615 cri.go:89] found id: ""
	I0429 20:07:21.302386   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.302398   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:21.302407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:21.302498   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:21.347003   66615 cri.go:89] found id: ""
	I0429 20:07:21.347028   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.347037   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:21.347043   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:21.347098   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:21.388202   66615 cri.go:89] found id: ""
	I0429 20:07:21.388236   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.388245   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:21.388257   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:21.388272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:21.442706   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:21.442744   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:21.457453   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:21.457489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:21.539669   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:21.539695   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:21.539707   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:21.625210   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:21.625247   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.173256   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:24.189920   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:24.189990   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:24.236730   66615 cri.go:89] found id: ""
	I0429 20:07:24.236761   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.236772   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:24.236779   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:24.236843   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:24.279031   66615 cri.go:89] found id: ""
	I0429 20:07:24.279055   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.279062   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:24.279067   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:24.279112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:24.321622   66615 cri.go:89] found id: ""
	I0429 20:07:24.321647   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.321657   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:24.321665   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:24.321726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:24.360884   66615 cri.go:89] found id: ""
	I0429 20:07:24.360911   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.360919   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:24.360924   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:24.360983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:24.414439   66615 cri.go:89] found id: ""
	I0429 20:07:24.414463   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.414472   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:24.414477   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:24.414559   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:24.456994   66615 cri.go:89] found id: ""
	I0429 20:07:24.457023   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.457033   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:24.457041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:24.457107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:24.497991   66615 cri.go:89] found id: ""
	I0429 20:07:24.498026   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.498036   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:24.498044   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:24.498137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:24.539375   66615 cri.go:89] found id: ""
	I0429 20:07:24.539415   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.539426   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:24.539438   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:24.539453   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:24.661778   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:24.661804   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:24.661820   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:24.748180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:24.748215   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.795963   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:24.795999   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:24.851485   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:24.851524   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:20.556543   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:22.556775   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:24.559759   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:23.052715   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:25.550857   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.551209   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:25.256195   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.258199   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.367869   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:27.385633   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:27.385716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:27.423181   66615 cri.go:89] found id: ""
	I0429 20:07:27.423210   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.423222   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:27.423233   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:27.423293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:27.467385   66615 cri.go:89] found id: ""
	I0429 20:07:27.467419   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.467432   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:27.467439   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:27.467503   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:27.506171   66615 cri.go:89] found id: ""
	I0429 20:07:27.506204   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.506216   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:27.506223   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:27.506272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:27.545043   66615 cri.go:89] found id: ""
	I0429 20:07:27.545066   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.545074   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:27.545080   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:27.545136   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:27.592279   66615 cri.go:89] found id: ""
	I0429 20:07:27.592306   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.592314   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:27.592320   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:27.592379   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:27.628569   66615 cri.go:89] found id: ""
	I0429 20:07:27.628595   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.628604   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:27.628612   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:27.628659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:27.667937   66615 cri.go:89] found id: ""
	I0429 20:07:27.667967   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.667978   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:27.667985   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:27.668047   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:27.708813   66615 cri.go:89] found id: ""
	I0429 20:07:27.708844   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.708853   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:27.708861   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:27.708876   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:27.789589   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:27.789625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:27.837147   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:27.837180   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:27.891928   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:27.891956   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:27.906162   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:27.906188   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:27.983738   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:27.057372   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:29.555872   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:30.049373   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:32.052279   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:29.758675   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:32.257486   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:30.484404   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:30.503968   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:30.504041   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:30.553070   66615 cri.go:89] found id: ""
	I0429 20:07:30.553099   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.553111   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:30.553118   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:30.553180   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:30.609226   66615 cri.go:89] found id: ""
	I0429 20:07:30.609253   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.609262   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:30.609267   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:30.609324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:30.658359   66615 cri.go:89] found id: ""
	I0429 20:07:30.658384   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.658395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:30.658401   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:30.658459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:30.710024   66615 cri.go:89] found id: ""
	I0429 20:07:30.710048   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.710058   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:30.710114   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:30.710173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:30.752361   66615 cri.go:89] found id: ""
	I0429 20:07:30.752388   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.752398   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:30.752405   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:30.752469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:30.793311   66615 cri.go:89] found id: ""
	I0429 20:07:30.793333   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.793341   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:30.793347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:30.793394   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:30.832371   66615 cri.go:89] found id: ""
	I0429 20:07:30.832400   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.832411   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:30.832417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:30.832469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:30.871183   66615 cri.go:89] found id: ""
	I0429 20:07:30.871215   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.871226   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:30.871237   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:30.871253   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:30.929909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:30.929947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:30.944454   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:30.944482   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:31.022060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:31.022100   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:31.022116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:31.104142   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:31.104185   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:33.651167   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:33.667888   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:33.667948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:33.708455   66615 cri.go:89] found id: ""
	I0429 20:07:33.708484   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.708495   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:33.708502   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:33.708561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:33.747578   66615 cri.go:89] found id: ""
	I0429 20:07:33.747602   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.747611   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:33.747616   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:33.747661   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:33.796005   66615 cri.go:89] found id: ""
	I0429 20:07:33.796036   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.796056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:33.796064   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:33.796128   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:33.836238   66615 cri.go:89] found id: ""
	I0429 20:07:33.836263   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.836271   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:33.836276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:33.836324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:33.877010   66615 cri.go:89] found id: ""
	I0429 20:07:33.877043   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.877056   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:33.877065   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:33.877137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:33.919690   66615 cri.go:89] found id: ""
	I0429 20:07:33.919714   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.919722   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:33.919727   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:33.919797   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:33.959857   66615 cri.go:89] found id: ""
	I0429 20:07:33.959889   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.959900   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:33.959907   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:33.959989   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:33.996349   66615 cri.go:89] found id: ""
	I0429 20:07:33.996376   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.996386   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:33.996396   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:33.996433   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:34.010773   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:34.010808   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:34.091581   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:34.091599   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:34.091611   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:34.173266   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:34.173299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:34.221447   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:34.221479   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:32.055352   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.056364   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.550100   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.550663   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.756264   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.756583   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.776486   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:36.791630   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:36.791764   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:36.837475   66615 cri.go:89] found id: ""
	I0429 20:07:36.837503   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.837513   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:36.837521   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:36.837607   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:36.879902   66615 cri.go:89] found id: ""
	I0429 20:07:36.879936   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.879947   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:36.879954   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:36.880021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:36.918566   66615 cri.go:89] found id: ""
	I0429 20:07:36.918594   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.918608   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:36.918613   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:36.918659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:36.958876   66615 cri.go:89] found id: ""
	I0429 20:07:36.958937   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.958948   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:36.958959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:36.959008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:36.998790   66615 cri.go:89] found id: ""
	I0429 20:07:36.998820   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.998845   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:36.998864   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:36.998932   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:37.036933   66615 cri.go:89] found id: ""
	I0429 20:07:37.036962   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.036972   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:37.036979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:37.037024   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:37.076560   66615 cri.go:89] found id: ""
	I0429 20:07:37.076597   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.076609   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:37.076616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:37.076688   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:37.118324   66615 cri.go:89] found id: ""
	I0429 20:07:37.118351   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.118360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:37.118368   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:37.118380   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:37.194671   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:37.194714   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:37.236269   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:37.236300   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:37.297006   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:37.297061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:37.312696   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:37.312723   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:37.387132   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:39.888111   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:39.903157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:39.903236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:39.945913   66615 cri.go:89] found id: ""
	I0429 20:07:39.945945   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.945956   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:39.945980   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:39.946076   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:36.056553   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:38.057230   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:39.050274   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:41.053502   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:38.756717   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:40.762297   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:39.986494   66615 cri.go:89] found id: ""
	I0429 20:07:39.986521   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.986530   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:39.986538   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:39.986598   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:40.031481   66615 cri.go:89] found id: ""
	I0429 20:07:40.031520   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.031531   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:40.031539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:40.031604   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:40.076792   66615 cri.go:89] found id: ""
	I0429 20:07:40.076816   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.076824   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:40.076830   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:40.076877   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:40.121020   66615 cri.go:89] found id: ""
	I0429 20:07:40.121050   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.121061   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:40.121068   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:40.121134   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:40.173189   66615 cri.go:89] found id: ""
	I0429 20:07:40.173221   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.173233   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:40.173241   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:40.173303   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:40.220190   66615 cri.go:89] found id: ""
	I0429 20:07:40.220212   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.220223   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:40.220229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:40.220293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:40.262552   66615 cri.go:89] found id: ""
	I0429 20:07:40.262579   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.262588   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:40.262600   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:40.262616   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:40.322249   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:40.322289   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:40.338703   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:40.338734   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:40.431311   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:40.431333   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:40.431345   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:40.518410   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:40.518446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:43.062556   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:43.077757   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:43.077844   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:43.129247   66615 cri.go:89] found id: ""
	I0429 20:07:43.129277   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.129289   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:43.129296   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:43.129364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:43.173474   66615 cri.go:89] found id: ""
	I0429 20:07:43.173501   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.173509   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:43.173514   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:43.173566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:43.218788   66615 cri.go:89] found id: ""
	I0429 20:07:43.218812   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.218820   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:43.218825   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:43.218873   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:43.259269   66615 cri.go:89] found id: ""
	I0429 20:07:43.259289   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.259297   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:43.259302   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:43.259362   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:43.301152   66615 cri.go:89] found id: ""
	I0429 20:07:43.301180   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.301189   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:43.301195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:43.301244   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:43.338183   66615 cri.go:89] found id: ""
	I0429 20:07:43.338211   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.338222   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:43.338229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:43.338276   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:43.376919   66615 cri.go:89] found id: ""
	I0429 20:07:43.376946   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.376958   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:43.376966   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:43.377032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:43.417421   66615 cri.go:89] found id: ""
	I0429 20:07:43.417450   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.417457   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:43.417465   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:43.417478   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:43.470009   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:43.470040   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:43.486059   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:43.486109   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:43.561688   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:43.561709   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:43.561725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:43.649713   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:43.649750   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:40.555780   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.056758   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.552176   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:46.049393   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.256870   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:45.258520   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:47.757738   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:46.194996   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:46.210261   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:46.210342   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:46.249208   66615 cri.go:89] found id: ""
	I0429 20:07:46.249240   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.249253   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:46.249260   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:46.249336   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:46.287285   66615 cri.go:89] found id: ""
	I0429 20:07:46.287315   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.287328   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:46.287335   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:46.287397   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:46.327944   66615 cri.go:89] found id: ""
	I0429 20:07:46.327976   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.327988   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:46.327996   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:46.328061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:46.373875   66615 cri.go:89] found id: ""
	I0429 20:07:46.373899   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.373908   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:46.373914   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:46.373967   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:46.413748   66615 cri.go:89] found id: ""
	I0429 20:07:46.413774   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.413783   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:46.413789   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:46.413853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:46.459380   66615 cri.go:89] found id: ""
	I0429 20:07:46.459412   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.459424   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:46.459432   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:46.459496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:46.499833   66615 cri.go:89] found id: ""
	I0429 20:07:46.499861   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.499870   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:46.499876   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:46.499939   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:46.541025   66615 cri.go:89] found id: ""
	I0429 20:07:46.541055   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.541068   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:46.541080   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:46.541096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:46.601187   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:46.601224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:46.617399   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:46.617426   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:46.697076   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:46.697113   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:46.697129   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:46.783265   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:46.783303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.335795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:49.350030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:49.350116   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:49.390278   66615 cri.go:89] found id: ""
	I0429 20:07:49.390315   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.390326   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:49.390333   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:49.390388   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:49.431145   66615 cri.go:89] found id: ""
	I0429 20:07:49.431175   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.431186   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:49.431193   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:49.431252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:49.473965   66615 cri.go:89] found id: ""
	I0429 20:07:49.473997   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.474014   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:49.474022   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:49.474105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:49.515372   66615 cri.go:89] found id: ""
	I0429 20:07:49.515407   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.515419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:49.515427   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:49.515487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:49.552541   66615 cri.go:89] found id: ""
	I0429 20:07:49.552567   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.552576   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:49.552582   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:49.552650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:49.599628   66615 cri.go:89] found id: ""
	I0429 20:07:49.599660   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.599672   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:49.599680   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:49.599745   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:49.642705   66615 cri.go:89] found id: ""
	I0429 20:07:49.642741   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.642752   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:49.642759   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:49.642827   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:49.679864   66615 cri.go:89] found id: ""
	I0429 20:07:49.679888   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.679896   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:49.679905   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:49.679919   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:49.765967   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:49.765986   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:49.766010   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:49.852739   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:49.852779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.905586   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:49.905613   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:45.559781   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:48.059952   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:48.049788   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:50.548836   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.551059   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:50.256898   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.757213   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:49.959443   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:49.959474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:52.476677   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:52.491378   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:52.491458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:52.535801   66615 cri.go:89] found id: ""
	I0429 20:07:52.535827   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.535835   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:52.535841   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:52.535901   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:52.582895   66615 cri.go:89] found id: ""
	I0429 20:07:52.582932   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.582944   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:52.582952   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:52.583022   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:52.627070   66615 cri.go:89] found id: ""
	I0429 20:07:52.627096   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.627113   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:52.627120   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:52.627181   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:52.673312   66615 cri.go:89] found id: ""
	I0429 20:07:52.673339   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.673348   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:52.673353   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:52.673399   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:52.713099   66615 cri.go:89] found id: ""
	I0429 20:07:52.713124   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.713131   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:52.713139   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:52.713205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:52.761982   66615 cri.go:89] found id: ""
	I0429 20:07:52.762007   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.762017   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:52.762024   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:52.762108   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:52.801019   66615 cri.go:89] found id: ""
	I0429 20:07:52.801048   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.801059   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:52.801067   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:52.801141   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:52.842544   66615 cri.go:89] found id: ""
	I0429 20:07:52.842578   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.842602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:52.842613   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:52.842630   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:52.896409   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:52.896442   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:52.912625   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:52.912650   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:52.992231   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:52.992260   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:52.992276   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:53.077473   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:53.077507   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:50.555818   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.556860   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:54.557161   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:54.554094   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:57.049699   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:55.257406   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:57.257840   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:55.625557   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:55.640211   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:55.640284   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:55.683215   66615 cri.go:89] found id: ""
	I0429 20:07:55.683250   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.683259   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:55.683275   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:55.683341   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:55.730820   66615 cri.go:89] found id: ""
	I0429 20:07:55.730851   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.730862   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:55.730869   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:55.730928   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:55.771784   66615 cri.go:89] found id: ""
	I0429 20:07:55.771808   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.771816   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:55.771821   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:55.771866   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:55.814988   66615 cri.go:89] found id: ""
	I0429 20:07:55.815021   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.815034   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:55.815042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:55.815114   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:55.859293   66615 cri.go:89] found id: ""
	I0429 20:07:55.859327   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.859340   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:55.859349   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:55.859416   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:55.901802   66615 cri.go:89] found id: ""
	I0429 20:07:55.901833   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.901844   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:55.901852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:55.901921   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:55.943863   66615 cri.go:89] found id: ""
	I0429 20:07:55.943895   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.943905   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:55.943913   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:55.943977   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:55.986256   66615 cri.go:89] found id: ""
	I0429 20:07:55.986284   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.986296   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:55.986314   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:55.986332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:56.036710   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:56.036742   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:56.099909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:56.099945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:56.117630   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:56.117660   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:56.197396   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:56.197421   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:56.197436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:58.779065   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:58.794086   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:58.794168   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:58.844035   66615 cri.go:89] found id: ""
	I0429 20:07:58.844062   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.844070   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:58.844076   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:58.844133   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:58.887859   66615 cri.go:89] found id: ""
	I0429 20:07:58.887889   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.887900   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:58.887906   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:58.887991   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:58.929039   66615 cri.go:89] found id: ""
	I0429 20:07:58.929072   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.929083   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:58.929092   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:58.929152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:58.965930   66615 cri.go:89] found id: ""
	I0429 20:07:58.965975   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.965983   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:58.965989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:58.966061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:59.005583   66615 cri.go:89] found id: ""
	I0429 20:07:59.005616   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.005628   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:59.005638   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:59.005697   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:59.047964   66615 cri.go:89] found id: ""
	I0429 20:07:59.047994   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.048007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:59.048014   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:59.048077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:59.091851   66615 cri.go:89] found id: ""
	I0429 20:07:59.091891   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.091904   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:59.091909   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:59.091978   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:59.134843   66615 cri.go:89] found id: ""
	I0429 20:07:59.134874   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.134881   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:59.134890   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:59.134907   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:59.219048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:59.219084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:59.267404   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:59.267436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:59.322264   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:59.322303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:59.339196   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:59.339235   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:59.441904   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:56.558660   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.057214   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.054473   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.550825   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.756683   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.759031   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.942998   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:01.957442   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:01.957502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:02.002240   66615 cri.go:89] found id: ""
	I0429 20:08:02.002271   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.002283   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:02.002291   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:02.002353   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:02.051506   66615 cri.go:89] found id: ""
	I0429 20:08:02.051535   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.051546   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:02.051552   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:02.051611   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:02.093194   66615 cri.go:89] found id: ""
	I0429 20:08:02.093234   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.093247   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:02.093254   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:02.093317   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:02.134988   66615 cri.go:89] found id: ""
	I0429 20:08:02.135016   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.135027   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:02.135034   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:02.135099   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:02.182954   66615 cri.go:89] found id: ""
	I0429 20:08:02.182982   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.182993   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:02.183000   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:02.183063   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:02.227778   66615 cri.go:89] found id: ""
	I0429 20:08:02.227807   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.227817   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:02.227826   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:02.227888   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:02.265593   66615 cri.go:89] found id: ""
	I0429 20:08:02.265624   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.265634   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:02.265641   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:02.265701   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:02.306520   66615 cri.go:89] found id: ""
	I0429 20:08:02.306550   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.306558   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:02.306566   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:02.306578   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:02.323806   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:02.323844   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:02.407110   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:02.407140   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:02.407153   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:02.493755   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:02.493791   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:02.538610   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:02.538640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:01.556084   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:03.556487   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:03.551788   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:05.553047   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:04.257831   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:06.756438   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:05.096630   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:05.111112   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:05.111173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:05.151237   66615 cri.go:89] found id: ""
	I0429 20:08:05.151268   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.151279   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:05.151286   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:05.151370   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:05.205344   66615 cri.go:89] found id: ""
	I0429 20:08:05.205379   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.205389   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:05.205396   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:05.205478   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:05.244394   66615 cri.go:89] found id: ""
	I0429 20:08:05.244426   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.244438   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:05.244445   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:05.244504   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:05.285320   66615 cri.go:89] found id: ""
	I0429 20:08:05.285343   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.285350   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:05.285356   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:05.285404   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:05.327618   66615 cri.go:89] found id: ""
	I0429 20:08:05.327645   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.327657   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:05.327664   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:05.327742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:05.369152   66615 cri.go:89] found id: ""
	I0429 20:08:05.369178   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.369194   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:05.369208   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:05.369277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:05.407206   66615 cri.go:89] found id: ""
	I0429 20:08:05.407234   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.407243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:05.407248   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:05.407299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:05.447404   66615 cri.go:89] found id: ""
	I0429 20:08:05.447438   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.447449   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:05.447459   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:05.447475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:05.529660   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:05.529700   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:05.582510   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:05.582565   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:05.639300   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:05.639351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:05.656825   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:05.656860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:05.730863   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.231635   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:08.247722   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:08.247811   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:08.298354   66615 cri.go:89] found id: ""
	I0429 20:08:08.298382   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.298395   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:08.298401   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:08.298459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:08.339497   66615 cri.go:89] found id: ""
	I0429 20:08:08.339536   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.339549   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:08.339556   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:08.339609   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:08.379665   66615 cri.go:89] found id: ""
	I0429 20:08:08.379695   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.379705   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:08.379712   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:08.379786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:08.419698   66615 cri.go:89] found id: ""
	I0429 20:08:08.419722   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.419732   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:08.419739   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:08.419798   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:08.463901   66615 cri.go:89] found id: ""
	I0429 20:08:08.463935   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.463946   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:08.463953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:08.464028   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:08.504568   66615 cri.go:89] found id: ""
	I0429 20:08:08.504603   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.504617   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:08.504626   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:08.504695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:08.545634   66615 cri.go:89] found id: ""
	I0429 20:08:08.545661   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.545671   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:08.545678   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:08.545741   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:08.586936   66615 cri.go:89] found id: ""
	I0429 20:08:08.586965   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.586976   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:08.586987   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:08.587003   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:08.641755   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:08.641794   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:08.659798   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:08.659845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:08.744265   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.744288   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:08.744303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:08.823813   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:08.823860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:05.557172   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:07.558538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:10.057841   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:08.049902   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:10.050576   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:12.051331   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:08.757300   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:11.257697   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:11.375600   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:11.396286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:11.396351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:11.442737   66615 cri.go:89] found id: ""
	I0429 20:08:11.442781   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.442789   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:11.442797   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:11.442865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:11.484131   66615 cri.go:89] found id: ""
	I0429 20:08:11.484158   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.484167   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:11.484172   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:11.484231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:11.526647   66615 cri.go:89] found id: ""
	I0429 20:08:11.526684   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.526695   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:11.526705   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:11.526777   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:11.572001   66615 cri.go:89] found id: ""
	I0429 20:08:11.572028   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.572036   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:11.572042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:11.572100   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:11.618980   66615 cri.go:89] found id: ""
	I0429 20:08:11.619003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.619011   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:11.619016   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:11.619077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:11.667079   66615 cri.go:89] found id: ""
	I0429 20:08:11.667107   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.667115   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:11.667123   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:11.667198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:11.707967   66615 cri.go:89] found id: ""
	I0429 20:08:11.708003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.708013   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:11.708020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:11.708073   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:11.753024   66615 cri.go:89] found id: ""
	I0429 20:08:11.753053   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.753062   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:11.753070   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:11.753081   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:11.820171   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:11.820210   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:11.852234   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:11.852263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:11.971060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:11.971085   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:11.971097   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:12.049797   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:12.049845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:14.601181   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:14.621413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:14.621496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:14.677453   66615 cri.go:89] found id: ""
	I0429 20:08:14.677486   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.677498   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:14.677504   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:14.677562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:14.720517   66615 cri.go:89] found id: ""
	I0429 20:08:14.720548   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.720560   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:14.720571   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:14.720636   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:14.770186   66615 cri.go:89] found id: ""
	I0429 20:08:14.770211   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.770219   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:14.770225   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:14.770301   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:14.815286   66615 cri.go:89] found id: ""
	I0429 20:08:14.815310   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.815320   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:14.815327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:14.815389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:14.862625   66615 cri.go:89] found id: ""
	I0429 20:08:14.862651   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.862662   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:14.862669   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:14.862726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:14.910517   66615 cri.go:89] found id: ""
	I0429 20:08:14.910554   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.910565   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:14.910572   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:14.910634   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:14.951085   66615 cri.go:89] found id: ""
	I0429 20:08:14.951110   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.951119   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:14.951124   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:14.951173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:12.558191   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:15.056987   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:14.051423   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:16.051632   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:13.757001   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:16.257425   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:14.991414   66615 cri.go:89] found id: ""
	I0429 20:08:14.991443   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.991455   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:14.991464   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:14.991476   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:15.047551   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:15.047583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:15.063667   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:15.063692   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:15.141744   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:15.141820   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:15.141841   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:15.225676   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:15.225722   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:17.774459   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:17.793137   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:17.793210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:17.856725   66615 cri.go:89] found id: ""
	I0429 20:08:17.856756   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.856767   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:17.856774   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:17.856835   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:17.916510   66615 cri.go:89] found id: ""
	I0429 20:08:17.916542   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.916554   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:17.916561   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:17.916646   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:17.970835   66615 cri.go:89] found id: ""
	I0429 20:08:17.970867   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.970877   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:17.970884   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:17.970948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:18.013324   66615 cri.go:89] found id: ""
	I0429 20:08:18.013353   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.013366   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:18.013384   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:18.013458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:18.062930   66615 cri.go:89] found id: ""
	I0429 20:08:18.062957   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.062968   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:18.062974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:18.063040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:18.111792   66615 cri.go:89] found id: ""
	I0429 20:08:18.111820   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.111829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:18.111834   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:18.111911   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:18.160096   66615 cri.go:89] found id: ""
	I0429 20:08:18.160121   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.160129   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:18.160135   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:18.160198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:18.204012   66615 cri.go:89] found id: ""
	I0429 20:08:18.204044   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.204052   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:18.204062   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:18.204074   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:18.284288   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:18.284337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:18.340746   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:18.340779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:18.397612   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:18.397652   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:18.413425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:18.413455   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:18.493598   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:17.058215   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:19.556308   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:18.551175   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:20.551292   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:22.551637   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:18.757370   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:21.259192   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:20.994339   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:21.010199   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:21.010289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:21.052190   66615 cri.go:89] found id: ""
	I0429 20:08:21.052219   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.052230   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:21.052237   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:21.052300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:21.090838   66615 cri.go:89] found id: ""
	I0429 20:08:21.090870   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.090882   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:21.090889   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:21.090953   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:21.137997   66615 cri.go:89] found id: ""
	I0429 20:08:21.138044   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.138056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:21.138082   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:21.138171   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:21.176278   66615 cri.go:89] found id: ""
	I0429 20:08:21.176311   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.176323   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:21.176331   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:21.176390   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:21.213925   66615 cri.go:89] found id: ""
	I0429 20:08:21.213955   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.213966   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:21.213973   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:21.214039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:21.253815   66615 cri.go:89] found id: ""
	I0429 20:08:21.253842   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.253850   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:21.253857   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:21.253905   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:21.296521   66615 cri.go:89] found id: ""
	I0429 20:08:21.296553   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.296565   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:21.296573   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:21.296633   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:21.337114   66615 cri.go:89] found id: ""
	I0429 20:08:21.337143   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.337150   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:21.337158   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:21.337177   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:21.384860   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:21.384901   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:21.443837   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:21.443899   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:21.460084   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:21.460116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:21.541230   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:21.541262   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:21.541278   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.132057   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:24.148381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:24.148458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:24.192469   66615 cri.go:89] found id: ""
	I0429 20:08:24.192499   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.192510   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:24.192516   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:24.192568   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:24.232150   66615 cri.go:89] found id: ""
	I0429 20:08:24.232177   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.232188   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:24.232195   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:24.232260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:24.272679   66615 cri.go:89] found id: ""
	I0429 20:08:24.272705   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.272714   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:24.272719   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:24.272772   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:24.317114   66615 cri.go:89] found id: ""
	I0429 20:08:24.317137   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.317145   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:24.317151   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:24.317200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:24.362251   66615 cri.go:89] found id: ""
	I0429 20:08:24.362279   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.362287   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:24.362294   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:24.362346   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:24.405696   66615 cri.go:89] found id: ""
	I0429 20:08:24.405721   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.405729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:24.405734   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:24.405828   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:24.446837   66615 cri.go:89] found id: ""
	I0429 20:08:24.446864   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.446871   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:24.446878   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:24.446929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:24.493416   66615 cri.go:89] found id: ""
	I0429 20:08:24.493445   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.493454   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:24.493462   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:24.493475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:24.555657   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:24.555693   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:24.572297   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:24.572328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:24.658463   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:24.658487   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:24.658499   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.752064   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:24.752103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:21.557948   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:24.056339   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:25.050530   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:27.554744   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:23.758156   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:26.261403   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:27.303812   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:27.319304   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:27.319373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:27.360473   66615 cri.go:89] found id: ""
	I0429 20:08:27.360509   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.360521   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:27.360529   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:27.360595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:27.404619   66615 cri.go:89] found id: ""
	I0429 20:08:27.404651   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.404668   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:27.404675   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:27.404742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:27.447464   66615 cri.go:89] found id: ""
	I0429 20:08:27.447490   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.447498   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:27.447503   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:27.447556   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:27.489197   66615 cri.go:89] found id: ""
	I0429 20:08:27.489235   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.489246   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:27.489253   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:27.489323   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:27.534354   66615 cri.go:89] found id: ""
	I0429 20:08:27.534387   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.534397   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:27.534404   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:27.534470   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:27.580721   66615 cri.go:89] found id: ""
	I0429 20:08:27.580751   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.580762   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:27.580769   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:27.580841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:27.620000   66615 cri.go:89] found id: ""
	I0429 20:08:27.620033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.620041   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:27.620046   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:27.620096   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:27.659000   66615 cri.go:89] found id: ""
	I0429 20:08:27.659033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.659041   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:27.659050   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:27.659062   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:27.739202   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:27.739241   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:27.784761   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:27.784807   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:27.842707   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:27.842748   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:27.859471   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:27.859498   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:27.942686   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:26.058098   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:28.059648   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.056692   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:32.550893   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:28.757412   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.759070   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.443410   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:30.460332   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:30.460417   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:30.497715   66615 cri.go:89] found id: ""
	I0429 20:08:30.497752   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.497764   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:30.497772   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:30.497841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:30.539376   66615 cri.go:89] found id: ""
	I0429 20:08:30.539409   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.539419   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:30.539426   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:30.539492   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:30.587567   66615 cri.go:89] found id: ""
	I0429 20:08:30.587596   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.587606   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:30.587616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:30.587679   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:30.626198   66615 cri.go:89] found id: ""
	I0429 20:08:30.626228   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.626238   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:30.626246   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:30.626313   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:30.665798   66615 cri.go:89] found id: ""
	I0429 20:08:30.665829   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.665837   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:30.665843   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:30.665909   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:30.708627   66615 cri.go:89] found id: ""
	I0429 20:08:30.708659   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.708671   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:30.708679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:30.708762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:30.754190   66615 cri.go:89] found id: ""
	I0429 20:08:30.754220   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.754230   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:30.754236   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:30.754295   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:30.797383   66615 cri.go:89] found id: ""
	I0429 20:08:30.797410   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.797421   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:30.797432   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:30.797447   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:30.843485   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:30.843512   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:30.900081   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:30.900118   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:30.916095   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:30.916125   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:30.995509   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:30.995529   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:30.995541   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:33.584596   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:33.600969   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:33.601058   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:33.643935   66615 cri.go:89] found id: ""
	I0429 20:08:33.643967   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.643979   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:33.643986   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:33.644049   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:33.681047   66615 cri.go:89] found id: ""
	I0429 20:08:33.681077   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.681085   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:33.681091   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:33.681160   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:33.726450   66615 cri.go:89] found id: ""
	I0429 20:08:33.726479   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.726490   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:33.726501   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:33.726561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:33.765237   66615 cri.go:89] found id: ""
	I0429 20:08:33.765264   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.765275   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:33.765281   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:33.765339   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:33.808333   66615 cri.go:89] found id: ""
	I0429 20:08:33.808366   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.808376   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:33.808383   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:33.808446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:33.854991   66615 cri.go:89] found id: ""
	I0429 20:08:33.855023   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.855034   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:33.855041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:33.855126   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:33.895405   66615 cri.go:89] found id: ""
	I0429 20:08:33.895434   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.895446   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:33.895455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:33.895521   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:33.937265   66615 cri.go:89] found id: ""
	I0429 20:08:33.937289   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.937297   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:33.937306   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:33.937324   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:33.991565   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:33.991594   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:34.006316   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:34.006343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:34.088734   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:34.088762   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:34.088776   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:34.180451   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:34.180489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:30.557020   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:33.058354   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:35.049638   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:37.051464   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:33.256955   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:35.257122   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:37.257629   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:36.727080   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:36.743038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:36.743124   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:36.785441   66615 cri.go:89] found id: ""
	I0429 20:08:36.785465   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.785475   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:36.785482   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:36.785542   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:36.828787   66615 cri.go:89] found id: ""
	I0429 20:08:36.828819   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.828829   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:36.828836   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:36.828896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:36.867712   66615 cri.go:89] found id: ""
	I0429 20:08:36.867738   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.867749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:36.867756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:36.867825   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:36.911435   66615 cri.go:89] found id: ""
	I0429 20:08:36.911462   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.911472   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:36.911478   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:36.911560   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:36.953803   66615 cri.go:89] found id: ""
	I0429 20:08:36.953828   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.953836   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:36.953842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:36.953903   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:36.990305   66615 cri.go:89] found id: ""
	I0429 20:08:36.990329   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.990339   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:36.990347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:36.990434   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:37.029177   66615 cri.go:89] found id: ""
	I0429 20:08:37.029206   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.029225   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:37.029232   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:37.029294   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:37.067583   66615 cri.go:89] found id: ""
	I0429 20:08:37.067605   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.067612   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:37.067619   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:37.067631   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:37.144739   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:37.144776   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:37.144788   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:37.227724   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:37.227762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:37.270383   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:37.270417   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:37.326858   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:37.326890   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:39.843323   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:39.859899   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:39.859961   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:39.903125   66615 cri.go:89] found id: ""
	I0429 20:08:39.903155   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.903164   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:39.903169   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:39.903243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:39.944271   66615 cri.go:89] found id: ""
	I0429 20:08:39.944300   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.944309   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:39.944314   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:39.944363   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:35.557115   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:38.056175   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.550339   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:42.048622   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.756355   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:42.255528   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.989934   66615 cri.go:89] found id: ""
	I0429 20:08:39.989964   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.989972   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:39.989978   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:39.990032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:40.025936   66615 cri.go:89] found id: ""
	I0429 20:08:40.025965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.025976   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:40.025983   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:40.026044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:40.065943   66615 cri.go:89] found id: ""
	I0429 20:08:40.065965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.065976   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:40.065984   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:40.066038   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:40.109986   66615 cri.go:89] found id: ""
	I0429 20:08:40.110018   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.110030   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:40.110038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:40.110115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:40.155610   66615 cri.go:89] found id: ""
	I0429 20:08:40.155716   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.155734   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:40.155745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:40.155803   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:40.196213   66615 cri.go:89] found id: ""
	I0429 20:08:40.196239   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.196246   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:40.196256   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:40.196272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:40.280330   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:40.280372   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:40.326774   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:40.326810   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:40.379438   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:40.379475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:40.395332   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:40.395362   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:40.504413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:43.005046   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:43.020464   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:43.020544   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:43.066403   66615 cri.go:89] found id: ""
	I0429 20:08:43.066432   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.066444   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:43.066452   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:43.066548   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:43.109732   66615 cri.go:89] found id: ""
	I0429 20:08:43.109760   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.109771   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:43.109778   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:43.109850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:43.158457   66615 cri.go:89] found id: ""
	I0429 20:08:43.158483   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.158492   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:43.158498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:43.158561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:43.207170   66615 cri.go:89] found id: ""
	I0429 20:08:43.207201   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.207213   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:43.207221   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:43.207281   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:43.246746   66615 cri.go:89] found id: ""
	I0429 20:08:43.246783   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.246804   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:43.246811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:43.246875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:43.292786   66615 cri.go:89] found id: ""
	I0429 20:08:43.292813   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.292824   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:43.292831   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:43.292896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:43.337509   66615 cri.go:89] found id: ""
	I0429 20:08:43.337537   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.337546   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:43.337551   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:43.337601   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:43.378446   66615 cri.go:89] found id: ""
	I0429 20:08:43.378473   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.378481   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:43.378490   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:43.378502   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:43.460438   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:43.460474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:43.503908   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:43.503945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:43.561661   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:43.561699   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:43.577924   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:43.577954   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:43.667006   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:40.555875   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:43.057183   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:44.049342   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.049873   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:44.256458   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.256554   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.168175   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:46.212494   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:46.212579   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:46.251567   66615 cri.go:89] found id: ""
	I0429 20:08:46.251593   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.251603   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:46.251610   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:46.251673   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:46.291913   66615 cri.go:89] found id: ""
	I0429 20:08:46.291943   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.291955   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:46.291962   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:46.292023   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:46.331801   66615 cri.go:89] found id: ""
	I0429 20:08:46.331827   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.331836   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:46.331842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:46.331899   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:46.375956   66615 cri.go:89] found id: ""
	I0429 20:08:46.375989   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.376001   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:46.376008   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:46.376090   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:46.425572   66615 cri.go:89] found id: ""
	I0429 20:08:46.425599   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.425609   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:46.425618   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:46.425681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:46.468161   66615 cri.go:89] found id: ""
	I0429 20:08:46.468226   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.468249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:46.468263   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:46.468433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:46.512163   66615 cri.go:89] found id: ""
	I0429 20:08:46.512193   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.512205   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:46.512212   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:46.512277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:46.556047   66615 cri.go:89] found id: ""
	I0429 20:08:46.556078   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.556088   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:46.556099   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:46.556111   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:46.609886   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:46.609921   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:46.625848   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:46.625878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:46.699005   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:46.699037   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:46.699053   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:46.783886   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:46.783923   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.331288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:49.344805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:49.344864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:49.381576   66615 cri.go:89] found id: ""
	I0429 20:08:49.381598   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.381605   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:49.381619   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:49.381667   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:49.418276   66615 cri.go:89] found id: ""
	I0429 20:08:49.418316   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.418329   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:49.418336   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:49.418389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:49.460147   66615 cri.go:89] found id: ""
	I0429 20:08:49.460177   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.460188   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:49.460195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:49.460253   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:49.500534   66615 cri.go:89] found id: ""
	I0429 20:08:49.500562   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.500569   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:49.500575   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:49.500632   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:49.538481   66615 cri.go:89] found id: ""
	I0429 20:08:49.538521   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.538534   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:49.538541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:49.538603   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:49.580192   66615 cri.go:89] found id: ""
	I0429 20:08:49.580218   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.580228   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:49.580234   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:49.580299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:49.616400   66615 cri.go:89] found id: ""
	I0429 20:08:49.616427   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.616437   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:49.616444   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:49.616551   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:49.652871   66615 cri.go:89] found id: ""
	I0429 20:08:49.652900   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.652918   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:49.652931   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:49.652947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:49.728173   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:49.728200   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:49.728212   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:49.813701   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:49.813749   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.855685   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:49.855712   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:49.906480   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:49.906514   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:45.559939   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.056008   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.056054   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.052578   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.550638   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.550910   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.257460   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.259418   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.757365   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.422430   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:52.437412   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:52.437488   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:52.476896   66615 cri.go:89] found id: ""
	I0429 20:08:52.476919   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.476927   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:52.476932   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:52.476976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:52.517266   66615 cri.go:89] found id: ""
	I0429 20:08:52.517298   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.517310   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:52.517318   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:52.517381   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:52.560886   66615 cri.go:89] found id: ""
	I0429 20:08:52.560909   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.560917   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:52.560922   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:52.560969   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:52.601362   66615 cri.go:89] found id: ""
	I0429 20:08:52.601398   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.601419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:52.601429   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:52.601506   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:52.639544   66615 cri.go:89] found id: ""
	I0429 20:08:52.639580   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.639591   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:52.639599   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:52.639652   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:52.681088   66615 cri.go:89] found id: ""
	I0429 20:08:52.681120   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.681130   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:52.681138   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:52.681204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:52.721777   66615 cri.go:89] found id: ""
	I0429 20:08:52.721802   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.721820   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:52.721828   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:52.721900   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:52.762823   66615 cri.go:89] found id: ""
	I0429 20:08:52.762845   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.762856   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:52.762863   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:52.762875   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:52.819291   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:52.819326   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:52.847120   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:52.847165   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:52.956274   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:52.956301   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:52.956317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:53.041636   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:53.041676   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:52.056558   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:54.555745   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.051656   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:57.549668   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.257083   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:57.757855   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.592636   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:55.607372   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:55.607449   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:55.643959   66615 cri.go:89] found id: ""
	I0429 20:08:55.643991   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.644000   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:55.644005   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:55.644061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:55.682272   66615 cri.go:89] found id: ""
	I0429 20:08:55.682304   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.682315   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:55.682323   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:55.682384   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:55.720157   66615 cri.go:89] found id: ""
	I0429 20:08:55.720189   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.720200   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:55.720207   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:55.720272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:55.761748   66615 cri.go:89] found id: ""
	I0429 20:08:55.761773   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.761781   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:55.761786   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:55.761842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:55.802377   66615 cri.go:89] found id: ""
	I0429 20:08:55.802405   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.802416   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:55.802423   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:55.802494   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:55.838986   66615 cri.go:89] found id: ""
	I0429 20:08:55.839016   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.839024   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:55.839030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:55.839077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:55.874991   66615 cri.go:89] found id: ""
	I0429 20:08:55.875022   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.875032   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:55.875039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:55.875106   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:55.913561   66615 cri.go:89] found id: ""
	I0429 20:08:55.913595   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.913607   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:55.913618   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:55.913633   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:55.965355   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:55.965391   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:55.981222   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:55.981259   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:56.056656   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:56.056685   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:56.056701   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:56.135276   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:56.135309   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:58.682855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:58.701679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:58.701769   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:58.760807   66615 cri.go:89] found id: ""
	I0429 20:08:58.760828   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.760841   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:58.760858   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:58.760910   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:58.835167   66615 cri.go:89] found id: ""
	I0429 20:08:58.835204   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.835216   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:58.835223   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:58.835289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:58.877367   66615 cri.go:89] found id: ""
	I0429 20:08:58.877398   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.877409   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:58.877417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:58.877483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:58.923726   66615 cri.go:89] found id: ""
	I0429 20:08:58.923751   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.923760   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:58.923766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:58.923817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:58.967780   66615 cri.go:89] found id: ""
	I0429 20:08:58.967804   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.967811   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:58.967816   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:58.967865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:59.010646   66615 cri.go:89] found id: ""
	I0429 20:08:59.010682   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.010690   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:59.010697   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:59.010759   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:59.057380   66615 cri.go:89] found id: ""
	I0429 20:08:59.057408   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.057418   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:59.057426   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:59.057483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:59.099669   66615 cri.go:89] found id: ""
	I0429 20:08:59.099698   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.099706   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:59.099715   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:59.099731   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:59.146831   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:59.146861   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:59.204232   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:59.204274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:59.219799   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:59.219824   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:59.305438   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:59.305465   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:59.305481   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:56.555976   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:58.557892   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:00.049511   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:02.050709   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:00.256064   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:02.257053   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:01.885861   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:01.900746   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:01.900808   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:01.942174   66615 cri.go:89] found id: ""
	I0429 20:09:01.942210   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.942218   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:01.942224   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:01.942285   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:01.986463   66615 cri.go:89] found id: ""
	I0429 20:09:01.986491   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.986502   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:01.986509   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:01.986570   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:02.026290   66615 cri.go:89] found id: ""
	I0429 20:09:02.026314   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.026321   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:02.026327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:02.026375   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:02.064239   66615 cri.go:89] found id: ""
	I0429 20:09:02.064259   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.064266   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:02.064271   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:02.064321   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:02.105807   66615 cri.go:89] found id: ""
	I0429 20:09:02.105838   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.105857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:02.105866   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:02.105926   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:02.144939   66615 cri.go:89] found id: ""
	I0429 20:09:02.144962   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.144970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:02.144975   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:02.145037   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:02.192866   66615 cri.go:89] found id: ""
	I0429 20:09:02.192891   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.192899   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:02.192905   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:02.192955   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:02.232485   66615 cri.go:89] found id: ""
	I0429 20:09:02.232515   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.232524   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:02.232533   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:02.232550   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:02.287374   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:02.287402   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:02.302979   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:02.303009   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:02.380693   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:02.380713   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:02.380725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:02.467048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:02.467084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:01.055311   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:03.055538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:05.056325   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:04.051014   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:06.556497   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:04.758329   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:07.256328   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:05.018176   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:05.033178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:05.033238   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:05.079008   66615 cri.go:89] found id: ""
	I0429 20:09:05.079034   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.079043   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:05.079050   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:05.079113   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:05.118620   66615 cri.go:89] found id: ""
	I0429 20:09:05.118642   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.118650   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:05.118655   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:05.118714   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:05.159603   66615 cri.go:89] found id: ""
	I0429 20:09:05.159646   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.159660   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:05.159666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:05.159733   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:05.200224   66615 cri.go:89] found id: ""
	I0429 20:09:05.200252   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.200262   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:05.200270   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:05.200344   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:05.246341   66615 cri.go:89] found id: ""
	I0429 20:09:05.246384   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.246396   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:05.246403   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:05.246471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:05.286126   66615 cri.go:89] found id: ""
	I0429 20:09:05.286153   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.286163   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:05.286171   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:05.286235   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:05.326911   66615 cri.go:89] found id: ""
	I0429 20:09:05.326941   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.326952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:05.326958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:05.327019   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:05.365564   66615 cri.go:89] found id: ""
	I0429 20:09:05.365592   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.365602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:05.365621   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:05.365637   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:05.445857   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:05.445877   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:05.445889   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:05.530129   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:05.530164   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:05.573936   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:05.573971   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:05.631263   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:05.631299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:08.147288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:08.162949   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:08.163021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:08.203009   66615 cri.go:89] found id: ""
	I0429 20:09:08.203033   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.203041   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:08.203047   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:08.203112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:08.241708   66615 cri.go:89] found id: ""
	I0429 20:09:08.241735   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.241744   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:08.241750   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:08.241801   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:08.283976   66615 cri.go:89] found id: ""
	I0429 20:09:08.284005   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.284017   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:08.284023   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:08.284091   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:08.323909   66615 cri.go:89] found id: ""
	I0429 20:09:08.323939   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.323951   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:08.323962   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:08.324031   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:08.363236   66615 cri.go:89] found id: ""
	I0429 20:09:08.363263   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.363271   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:08.363276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:08.363328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:08.401767   66615 cri.go:89] found id: ""
	I0429 20:09:08.401790   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.401798   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:08.401803   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:08.401851   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:08.443678   66615 cri.go:89] found id: ""
	I0429 20:09:08.443709   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.443726   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:08.443731   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:08.443791   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:08.489025   66615 cri.go:89] found id: ""
	I0429 20:09:08.489069   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.489103   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:08.489129   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:08.489163   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:08.543421   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:08.543462   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:08.560425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:08.560459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:08.642819   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:08.642840   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:08.642855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:08.726644   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:08.726682   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:07.555523   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.556138   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.049664   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.050246   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.256452   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.257458   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.277817   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:11.292340   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:11.292420   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:11.330721   66615 cri.go:89] found id: ""
	I0429 20:09:11.330756   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.330768   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:11.330776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:11.330850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:11.372057   66615 cri.go:89] found id: ""
	I0429 20:09:11.372089   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.372098   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:11.372103   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:11.372155   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:11.414786   66615 cri.go:89] found id: ""
	I0429 20:09:11.414814   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.414825   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:11.414832   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:11.414898   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:11.454934   66615 cri.go:89] found id: ""
	I0429 20:09:11.454961   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.454969   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:11.454974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:11.455039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:11.494169   66615 cri.go:89] found id: ""
	I0429 20:09:11.494200   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.494211   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:11.494217   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:11.494277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:11.541646   66615 cri.go:89] found id: ""
	I0429 20:09:11.541684   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.541694   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:11.541701   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:11.541766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:11.584025   66615 cri.go:89] found id: ""
	I0429 20:09:11.584055   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.584067   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:11.584075   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:11.584138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:11.622425   66615 cri.go:89] found id: ""
	I0429 20:09:11.622459   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.622471   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:11.622481   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:11.622493   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:11.676416   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:11.676450   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:11.693793   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:11.693822   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:11.771410   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:11.771437   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:11.771454   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:11.854969   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:11.855047   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:14.398871   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:14.415894   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:14.415983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:14.454718   66615 cri.go:89] found id: ""
	I0429 20:09:14.454752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.454763   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:14.454773   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:14.454836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:14.498562   66615 cri.go:89] found id: ""
	I0429 20:09:14.498591   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.498602   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:14.498609   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:14.498669   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:14.536357   66615 cri.go:89] found id: ""
	I0429 20:09:14.536384   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.536395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:14.536402   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:14.536460   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:14.577240   66615 cri.go:89] found id: ""
	I0429 20:09:14.577274   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.577284   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:14.577291   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:14.577372   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:14.617231   66615 cri.go:89] found id: ""
	I0429 20:09:14.617266   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.617279   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:14.617287   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:14.617355   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:14.659053   66615 cri.go:89] found id: ""
	I0429 20:09:14.659081   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.659090   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:14.659096   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:14.659145   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:14.708723   66615 cri.go:89] found id: ""
	I0429 20:09:14.708752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.708760   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:14.708766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:14.708814   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:14.753732   66615 cri.go:89] found id: ""
	I0429 20:09:14.753762   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.753773   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:14.753783   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:14.753798   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:14.771952   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:14.771985   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:14.842649   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:14.842680   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:14.842696   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:14.925565   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:14.925603   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:11.556903   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:14.057196   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:13.550999   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:16.054439   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:13.257735   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:15.756651   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:17.756760   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:14.975731   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:14.975765   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.528872   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:17.544373   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:17.544455   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:17.582977   66615 cri.go:89] found id: ""
	I0429 20:09:17.583001   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.583009   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:17.583014   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:17.583079   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:17.620322   66615 cri.go:89] found id: ""
	I0429 20:09:17.620352   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.620368   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:17.620373   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:17.620421   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:17.664339   66615 cri.go:89] found id: ""
	I0429 20:09:17.664367   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.664375   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:17.664381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:17.664433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:17.705150   66615 cri.go:89] found id: ""
	I0429 20:09:17.705175   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.705184   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:17.705189   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:17.705239   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:17.749713   66615 cri.go:89] found id: ""
	I0429 20:09:17.749738   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.749747   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:17.749752   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:17.749850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:17.791528   66615 cri.go:89] found id: ""
	I0429 20:09:17.791552   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.791560   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:17.791566   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:17.791615   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:17.834994   66615 cri.go:89] found id: ""
	I0429 20:09:17.835024   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.835035   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:17.835050   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:17.835107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:17.872194   66615 cri.go:89] found id: ""
	I0429 20:09:17.872226   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.872236   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:17.872248   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:17.872263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.926899   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:17.926936   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:17.944184   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:17.944218   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:18.029224   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:18.029246   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:18.029258   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:18.111112   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:18.111147   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:16.557282   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:19.056682   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:18.549106   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:20.550026   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:19.758897   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:22.257104   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:20.655965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:20.671420   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:20.671487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:20.710100   66615 cri.go:89] found id: ""
	I0429 20:09:20.710132   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.710144   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:20.710151   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:20.710221   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:20.748849   66615 cri.go:89] found id: ""
	I0429 20:09:20.748877   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.748888   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:20.748894   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:20.748956   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:20.788113   66615 cri.go:89] found id: ""
	I0429 20:09:20.788140   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.788151   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:20.788157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:20.788217   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:20.831432   66615 cri.go:89] found id: ""
	I0429 20:09:20.831455   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.831462   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:20.831470   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:20.831518   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:20.878156   66615 cri.go:89] found id: ""
	I0429 20:09:20.878183   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.878191   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:20.878197   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:20.878262   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:20.920691   66615 cri.go:89] found id: ""
	I0429 20:09:20.920718   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.920729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:20.920735   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:20.920795   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:20.960674   66615 cri.go:89] found id: ""
	I0429 20:09:20.960709   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.960719   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:20.960726   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:20.960786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:21.006462   66615 cri.go:89] found id: ""
	I0429 20:09:21.006486   66615 logs.go:276] 0 containers: []
	W0429 20:09:21.006495   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:21.006503   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:21.006518   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:21.060040   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:21.060076   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:21.077141   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:21.077171   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:21.157058   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:21.157083   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:21.157096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:21.265626   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:21.265662   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:23.813718   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:23.828338   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:23.828400   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:23.868730   66615 cri.go:89] found id: ""
	I0429 20:09:23.868760   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.868771   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:23.868776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:23.868842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:23.907919   66615 cri.go:89] found id: ""
	I0429 20:09:23.907941   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.907949   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:23.907956   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:23.908011   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:23.956769   66615 cri.go:89] found id: ""
	I0429 20:09:23.956794   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.956805   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:23.956811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:23.956875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:23.998578   66615 cri.go:89] found id: ""
	I0429 20:09:23.998612   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.998621   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:23.998628   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:23.998681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:24.037458   66615 cri.go:89] found id: ""
	I0429 20:09:24.037485   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.037492   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:24.037499   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:24.037562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:24.078305   66615 cri.go:89] found id: ""
	I0429 20:09:24.078336   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.078351   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:24.078358   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:24.078418   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:24.120100   66615 cri.go:89] found id: ""
	I0429 20:09:24.120129   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.120139   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:24.120147   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:24.120211   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:24.160953   66615 cri.go:89] found id: ""
	I0429 20:09:24.160988   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.161000   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:24.161012   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:24.161029   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:24.176654   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:24.176686   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:24.256631   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:24.256652   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:24.256668   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:24.335379   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:24.335424   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:24.379616   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:24.379649   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:21.556726   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:24.057483   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:23.050004   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:25.550882   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:27.551051   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:24.257726   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:26.757098   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:26.937283   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:26.956185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:26.956252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:26.997000   66615 cri.go:89] found id: ""
	I0429 20:09:26.997034   66615 logs.go:276] 0 containers: []
	W0429 20:09:26.997046   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:26.997053   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:26.997115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:27.042494   66615 cri.go:89] found id: ""
	I0429 20:09:27.042527   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.042538   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:27.042546   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:27.042608   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:27.086170   66615 cri.go:89] found id: ""
	I0429 20:09:27.086199   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.086211   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:27.086218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:27.086282   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:27.126502   66615 cri.go:89] found id: ""
	I0429 20:09:27.126531   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.126542   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:27.126560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:27.126635   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:27.175102   66615 cri.go:89] found id: ""
	I0429 20:09:27.175134   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.175142   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:27.175148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:27.175216   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:27.215983   66615 cri.go:89] found id: ""
	I0429 20:09:27.216013   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.216025   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:27.216033   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:27.216097   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:27.256427   66615 cri.go:89] found id: ""
	I0429 20:09:27.256456   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.256467   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:27.256474   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:27.256540   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:27.298444   66615 cri.go:89] found id: ""
	I0429 20:09:27.298479   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.298490   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:27.298501   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:27.298517   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:27.381579   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:27.381625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:27.429304   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:27.429350   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:27.483044   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:27.483082   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:27.500304   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:27.500332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:27.583909   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:26.555285   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:28.560544   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:30.049769   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:32.050537   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:29.256689   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:31.257554   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:30.084904   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:30.102417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:30.102486   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:30.146726   66615 cri.go:89] found id: ""
	I0429 20:09:30.146748   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.146755   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:30.146761   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:30.146809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:30.190739   66615 cri.go:89] found id: ""
	I0429 20:09:30.190768   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.190780   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:30.190788   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:30.190853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:30.228836   66615 cri.go:89] found id: ""
	I0429 20:09:30.228864   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.228879   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:30.228887   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:30.228951   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:30.270876   66615 cri.go:89] found id: ""
	I0429 20:09:30.270912   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.270920   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:30.270925   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:30.270995   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:30.310762   66615 cri.go:89] found id: ""
	I0429 20:09:30.310787   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.310795   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:30.310801   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:30.310850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:30.356339   66615 cri.go:89] found id: ""
	I0429 20:09:30.356363   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.356371   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:30.356376   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:30.356430   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:30.395540   66615 cri.go:89] found id: ""
	I0429 20:09:30.395575   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.395589   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:30.395598   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:30.395671   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:30.446237   66615 cri.go:89] found id: ""
	I0429 20:09:30.446263   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.446276   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:30.446286   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:30.446301   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:30.537309   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:30.537334   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:30.537349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:30.629116   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:30.629151   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:30.683308   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:30.683337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:30.735879   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:30.735910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.252322   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:33.268276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:33.268351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:33.309531   66615 cri.go:89] found id: ""
	I0429 20:09:33.309622   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.309641   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:33.309650   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:33.309719   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:33.367480   66615 cri.go:89] found id: ""
	I0429 20:09:33.367515   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.367527   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:33.367535   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:33.367595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:33.433717   66615 cri.go:89] found id: ""
	I0429 20:09:33.433742   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.433751   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:33.433756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:33.433820   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:33.484053   66615 cri.go:89] found id: ""
	I0429 20:09:33.484081   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.484093   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:33.484100   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:33.484165   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:33.524103   66615 cri.go:89] found id: ""
	I0429 20:09:33.524126   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.524136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:33.524143   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:33.524204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:33.565692   66615 cri.go:89] found id: ""
	I0429 20:09:33.565711   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.565719   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:33.565724   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:33.565784   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:33.607119   66615 cri.go:89] found id: ""
	I0429 20:09:33.607143   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.607153   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:33.607160   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:33.607225   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:33.648407   66615 cri.go:89] found id: ""
	I0429 20:09:33.648432   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.648440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:33.648449   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:33.648463   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:33.730744   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:33.730781   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:33.774295   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:33.774328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:33.829609   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:33.829653   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.846048   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:33.846092   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:33.924413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:31.056307   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:33.056538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:34.548872   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.550765   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:33.758571   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.257361   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.425072   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:36.440185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:36.440268   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:36.484364   66615 cri.go:89] found id: ""
	I0429 20:09:36.484386   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.484394   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:36.484400   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:36.484450   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:36.520436   66615 cri.go:89] found id: ""
	I0429 20:09:36.520466   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.520478   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:36.520487   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:36.520549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:36.563597   66615 cri.go:89] found id: ""
	I0429 20:09:36.563622   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.563630   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:36.563635   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:36.563704   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:36.613106   66615 cri.go:89] found id: ""
	I0429 20:09:36.613134   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.613143   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:36.613148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:36.613204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:36.658127   66615 cri.go:89] found id: ""
	I0429 20:09:36.658151   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.658159   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:36.658166   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:36.658229   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:36.707388   66615 cri.go:89] found id: ""
	I0429 20:09:36.707415   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.707423   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:36.707430   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:36.707479   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:36.753363   66615 cri.go:89] found id: ""
	I0429 20:09:36.753394   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.753405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:36.753413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:36.753475   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:36.801492   66615 cri.go:89] found id: ""
	I0429 20:09:36.801513   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.801521   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:36.801530   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:36.801542   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:36.857055   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:36.857108   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:36.874567   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:36.874595   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:36.956176   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:36.956202   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:36.956217   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:37.039958   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:37.039997   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:39.591442   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:39.607842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:39.607927   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:39.651917   66615 cri.go:89] found id: ""
	I0429 20:09:39.651941   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.651948   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:39.651955   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:39.652020   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:39.690032   66615 cri.go:89] found id: ""
	I0429 20:09:39.690059   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.690078   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:39.690086   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:39.690152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:39.733176   66615 cri.go:89] found id: ""
	I0429 20:09:39.733200   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.733209   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:39.733215   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:39.733261   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:39.779528   66615 cri.go:89] found id: ""
	I0429 20:09:39.779560   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.779572   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:39.779581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:39.779650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:39.822408   66615 cri.go:89] found id: ""
	I0429 20:09:39.822436   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.822445   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:39.822452   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:39.822522   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:39.864895   66615 cri.go:89] found id: ""
	I0429 20:09:39.864922   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.864930   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:39.864938   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:39.865008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:39.907498   66615 cri.go:89] found id: ""
	I0429 20:09:39.907523   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.907533   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:39.907539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:39.907606   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:39.948400   66615 cri.go:89] found id: ""
	I0429 20:09:39.948430   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.948440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:39.948449   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:39.948465   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:35.557262   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:38.056877   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:40.058568   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:39.049938   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:41.050139   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:38.756883   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:41.256775   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:39.964733   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:39.964763   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:40.043568   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:40.043593   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:40.043609   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:40.130776   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:40.130815   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:40.182011   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:40.182042   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:42.739068   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:42.756144   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:42.756286   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:42.798776   66615 cri.go:89] found id: ""
	I0429 20:09:42.798801   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.798810   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:42.798815   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:42.798861   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:42.837122   66615 cri.go:89] found id: ""
	I0429 20:09:42.837146   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.837154   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:42.837159   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:42.837205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:42.875435   66615 cri.go:89] found id: ""
	I0429 20:09:42.875461   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.875471   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:42.875479   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:42.875536   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:42.920044   66615 cri.go:89] found id: ""
	I0429 20:09:42.920076   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.920087   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:42.920094   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:42.920175   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:42.960122   66615 cri.go:89] found id: ""
	I0429 20:09:42.960152   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.960163   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:42.960169   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:42.960215   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:42.999784   66615 cri.go:89] found id: ""
	I0429 20:09:42.999811   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.999829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:42.999837   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:42.999917   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:43.040882   66615 cri.go:89] found id: ""
	I0429 20:09:43.040930   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.040952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:43.040959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:43.041044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:43.082596   66615 cri.go:89] found id: ""
	I0429 20:09:43.082627   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.082639   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:43.082650   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:43.082672   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:43.140302   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:43.140343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:43.157508   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:43.157547   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:43.241025   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:43.241047   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:43.241061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:43.325820   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:43.325855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:42.058727   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:44.556415   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:43.051020   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.550017   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:43.258400   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.756441   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:47.757029   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.871561   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:45.887323   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:45.887398   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:45.930021   66615 cri.go:89] found id: ""
	I0429 20:09:45.930050   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.930062   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:45.930088   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:45.930148   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:45.971404   66615 cri.go:89] found id: ""
	I0429 20:09:45.971434   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.971445   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:45.971452   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:45.971513   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:46.018801   66615 cri.go:89] found id: ""
	I0429 20:09:46.018825   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.018833   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:46.018838   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:46.018886   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:46.065118   66615 cri.go:89] found id: ""
	I0429 20:09:46.065140   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.065148   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:46.065153   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:46.065201   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:46.105244   66615 cri.go:89] found id: ""
	I0429 20:09:46.105271   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.105294   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:46.105309   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:46.105373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:46.153736   66615 cri.go:89] found id: ""
	I0429 20:09:46.153759   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.153768   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:46.153773   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:46.153836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:46.198940   66615 cri.go:89] found id: ""
	I0429 20:09:46.198965   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.198973   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:46.198979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:46.199064   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:46.238001   66615 cri.go:89] found id: ""
	I0429 20:09:46.238031   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.238044   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:46.238056   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:46.238087   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:46.292309   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:46.292357   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:46.307243   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:46.307274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:46.386832   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:46.386852   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:46.386869   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:46.468856   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:46.468891   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.017354   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:49.032753   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:49.032832   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:49.075345   66615 cri.go:89] found id: ""
	I0429 20:09:49.075375   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.075388   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:49.075394   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:49.075447   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:49.115294   66615 cri.go:89] found id: ""
	I0429 20:09:49.115328   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.115339   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:49.115347   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:49.115412   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:49.164115   66615 cri.go:89] found id: ""
	I0429 20:09:49.164140   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.164148   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:49.164154   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:49.164210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:49.207643   66615 cri.go:89] found id: ""
	I0429 20:09:49.207668   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.207679   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:49.207698   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:49.207762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:49.247121   66615 cri.go:89] found id: ""
	I0429 20:09:49.247147   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.247156   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:49.247162   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:49.247220   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:49.288594   66615 cri.go:89] found id: ""
	I0429 20:09:49.288626   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.288636   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:49.288643   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:49.288711   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:49.330243   66615 cri.go:89] found id: ""
	I0429 20:09:49.330273   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.330290   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:49.330300   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:49.330365   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:49.371304   66615 cri.go:89] found id: ""
	I0429 20:09:49.371348   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.371360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:49.371372   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:49.371392   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:49.450910   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:49.450949   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.494940   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:49.494970   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:49.553320   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:49.553364   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:49.568850   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:49.568878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:49.644932   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:46.559246   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:49.056790   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:48.050285   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:50.050579   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.549882   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:49.757113   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.258680   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.145702   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:52.162681   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:52.162756   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:52.204816   66615 cri.go:89] found id: ""
	I0429 20:09:52.204858   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.204870   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:52.204888   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:52.204963   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:52.248481   66615 cri.go:89] found id: ""
	I0429 20:09:52.248510   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.248519   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:52.248525   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:52.248596   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:52.289158   66615 cri.go:89] found id: ""
	I0429 20:09:52.289186   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.289194   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:52.289200   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:52.289260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:52.329905   66615 cri.go:89] found id: ""
	I0429 20:09:52.329931   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.329942   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:52.329950   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:52.330025   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:52.372523   66615 cri.go:89] found id: ""
	I0429 20:09:52.372546   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.372554   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:52.372560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:52.372623   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:52.414936   66615 cri.go:89] found id: ""
	I0429 20:09:52.414970   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.414982   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:52.414989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:52.415056   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:52.454139   66615 cri.go:89] found id: ""
	I0429 20:09:52.454164   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.454172   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:52.454178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:52.454236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:52.494093   66615 cri.go:89] found id: ""
	I0429 20:09:52.494129   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.494142   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:52.494155   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:52.494195   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:52.552104   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:52.552142   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:52.568430   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:52.568459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:52.649708   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:52.649736   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:52.649752   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:52.746231   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:52.746272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:51.057536   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:53.556862   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:55.049835   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:57.050606   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:54.759308   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:57.256396   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:55.296228   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:55.311257   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:55.311328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:55.352071   66615 cri.go:89] found id: ""
	I0429 20:09:55.352098   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.352109   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:55.352116   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:55.352177   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:55.399806   66615 cri.go:89] found id: ""
	I0429 20:09:55.399837   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.399847   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:55.399860   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:55.399947   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:55.444372   66615 cri.go:89] found id: ""
	I0429 20:09:55.444398   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.444406   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:55.444411   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:55.444468   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:55.485542   66615 cri.go:89] found id: ""
	I0429 20:09:55.485568   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.485579   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:55.485586   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:55.485670   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:55.535452   66615 cri.go:89] found id: ""
	I0429 20:09:55.535483   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.535494   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:55.535502   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:55.535566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:55.578009   66615 cri.go:89] found id: ""
	I0429 20:09:55.578036   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.578048   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:55.578056   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:55.578138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:55.618302   66615 cri.go:89] found id: ""
	I0429 20:09:55.618336   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.618347   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:55.618355   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:55.618419   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:55.660489   66615 cri.go:89] found id: ""
	I0429 20:09:55.660518   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.660526   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:55.660535   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:55.660548   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:55.713953   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:55.713993   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:55.729624   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:55.729656   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:55.813718   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:55.813746   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:55.813762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:55.898805   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:55.898849   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:58.467014   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:58.482852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:58.482925   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:58.522862   66615 cri.go:89] found id: ""
	I0429 20:09:58.522896   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.522908   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:58.522916   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:58.523000   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:58.568234   66615 cri.go:89] found id: ""
	I0429 20:09:58.568259   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.568266   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:58.568272   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:58.568327   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:58.609147   66615 cri.go:89] found id: ""
	I0429 20:09:58.609175   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.609185   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:58.609192   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:58.609265   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:58.657074   66615 cri.go:89] found id: ""
	I0429 20:09:58.657104   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.657115   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:58.657122   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:58.657186   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:58.706819   66615 cri.go:89] found id: ""
	I0429 20:09:58.706846   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.706857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:58.706865   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:58.706929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:58.754967   66615 cri.go:89] found id: ""
	I0429 20:09:58.754998   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.755007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:58.755018   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:58.755078   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:58.793657   66615 cri.go:89] found id: ""
	I0429 20:09:58.793694   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.793704   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:58.793709   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:58.793766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:58.832023   66615 cri.go:89] found id: ""
	I0429 20:09:58.832055   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.832066   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:58.832078   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:58.832094   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:58.886568   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:58.886605   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:58.902126   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:58.902154   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:58.986786   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:58.986814   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:58.986831   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:59.072258   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:59.072296   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:55.557245   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:58.056570   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:59.549825   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:02.050651   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:59.756493   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:01.756935   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:01.620172   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:01.636958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:01.637055   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:01.703865   66615 cri.go:89] found id: ""
	I0429 20:10:01.703890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.703899   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:01.703905   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:01.703950   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:01.742655   66615 cri.go:89] found id: ""
	I0429 20:10:01.742684   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.742692   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:01.742707   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:01.742778   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:01.782866   66615 cri.go:89] found id: ""
	I0429 20:10:01.782890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.782901   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:01.782908   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:01.782964   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:01.822958   66615 cri.go:89] found id: ""
	I0429 20:10:01.822984   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.822992   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:01.822997   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:01.823044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:01.868581   66615 cri.go:89] found id: ""
	I0429 20:10:01.868604   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.868612   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:01.868622   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:01.868675   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:01.908216   66615 cri.go:89] found id: ""
	I0429 20:10:01.908241   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.908249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:01.908255   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:01.908328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:01.953100   66615 cri.go:89] found id: ""
	I0429 20:10:01.953131   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.953142   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:01.953150   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:01.953213   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:01.999940   66615 cri.go:89] found id: ""
	I0429 20:10:01.999974   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.999988   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:01.999999   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:02.000012   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:02.061669   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:02.061704   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:02.077609   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:02.077640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:02.169643   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:02.169666   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:02.169679   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:02.250615   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:02.250657   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:04.803629   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:04.819286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:04.819364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:04.860501   66615 cri.go:89] found id: ""
	I0429 20:10:04.860530   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.860541   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:04.860548   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:04.860672   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:04.898444   66615 cri.go:89] found id: ""
	I0429 20:10:04.898472   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.898480   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:04.898486   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:04.898546   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:04.936569   66615 cri.go:89] found id: ""
	I0429 20:10:04.936599   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.936609   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:04.936617   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:04.936695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:00.556325   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:02.557754   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:05.058245   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:04.551711   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:07.050327   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:03.757096   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:06.257529   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:04.979667   66615 cri.go:89] found id: ""
	I0429 20:10:04.979696   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.979708   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:04.979715   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:04.979768   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:05.019608   66615 cri.go:89] found id: ""
	I0429 20:10:05.019638   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.019650   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:05.019658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:05.019724   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:05.063723   66615 cri.go:89] found id: ""
	I0429 20:10:05.063749   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.063758   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:05.063765   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:05.063821   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:05.106676   66615 cri.go:89] found id: ""
	I0429 20:10:05.106704   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.106714   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:05.106721   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:05.106783   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:05.147652   66615 cri.go:89] found id: ""
	I0429 20:10:05.147683   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.147693   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:05.147704   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:05.147721   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:05.189048   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:05.189085   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:05.248635   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:05.248669   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:05.265791   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:05.265826   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:05.343190   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:05.343217   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:05.343234   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:07.926868   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:07.942581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:07.942656   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:07.981316   66615 cri.go:89] found id: ""
	I0429 20:10:07.981349   66615 logs.go:276] 0 containers: []
	W0429 20:10:07.981361   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:07.981368   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:07.981429   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:08.024017   66615 cri.go:89] found id: ""
	I0429 20:10:08.024045   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.024056   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:08.024062   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:08.024146   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:08.075761   66615 cri.go:89] found id: ""
	I0429 20:10:08.075786   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.075798   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:08.075805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:08.075864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:08.146501   66615 cri.go:89] found id: ""
	I0429 20:10:08.146528   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.146536   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:08.146541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:08.146624   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:08.204987   66615 cri.go:89] found id: ""
	I0429 20:10:08.205013   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.205021   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:08.205027   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:08.205083   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:08.244930   66615 cri.go:89] found id: ""
	I0429 20:10:08.244959   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.244970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:08.244979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:08.245040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:08.284204   66615 cri.go:89] found id: ""
	I0429 20:10:08.284232   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.284243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:08.284250   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:08.284305   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:08.324077   66615 cri.go:89] found id: ""
	I0429 20:10:08.324102   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.324113   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:08.324123   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:08.324139   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:08.341584   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:08.341614   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:08.429808   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:08.429827   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:08.429840   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:08.509906   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:08.509942   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:08.562662   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:08.562697   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:07.557462   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:10.055718   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:09.553108   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:12.050533   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:12.543954   66218 pod_ready.go:81] duration metric: took 4m0.001047967s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" ...
	E0429 20:10:12.543994   66218 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 20:10:12.544032   66218 pod_ready.go:38] duration metric: took 4m6.615064199s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:10:12.544058   66218 kubeadm.go:591] duration metric: took 4m18.60301174s to restartPrimaryControlPlane
	W0429 20:10:12.544116   66218 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:10:12.544146   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
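	[Editor's note] The pod_ready.go lines interleaved throughout come from three other clusters under test (processes 66875, 66218 and 65980), each polling its metrics-server pod for the Ready condition roughly every 2s. Here process 66218 exhausts its 4m WaitExtra budget, gives up without retrying, and falls through to the same "reset cluster" path (kubeadm reset with the v1.30.0 binaries). The condition being polled can be checked by hand with something like the following, where <context> is a placeholder for that cluster's kubeconfig context (the pod name is taken from the log):

	    kubectl --context <context> -n kube-system get pod metrics-server-569cc877fc-6mpnm \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" here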
	I0429 20:10:08.757127   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:10.760764   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:11.121673   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:11.137328   66615 kubeadm.go:591] duration metric: took 4m4.72832668s to restartPrimaryControlPlane
	W0429 20:10:11.137411   66615 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:10:11.137446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:10:13.254357   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.116867978s)
	I0429 20:10:13.254436   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:13.275293   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:10:13.287073   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:10:13.298046   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:10:13.298080   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:10:13.298132   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:10:13.311790   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:10:13.311861   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:10:13.323201   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:10:13.334284   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:10:13.334357   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:10:13.348597   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.361993   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:10:13.362055   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.376185   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:10:13.389715   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:10:13.389778   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:10:13.403955   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:10:13.675887   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
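	The block above is minikube's stale-kubeconfig cleanup: the initial ls on the four control-plane kubeconfigs exits with status 2 (the files do not exist), so the stale-config cleanup is skipped, and each file is then grepped for the expected control-plane endpoint and removed with rm -f when the endpoint is absent. A minimal bash sketch of the same check, assuming a shell on the node and using the paths and endpoint shown in the log:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # Drop any kubeconfig that does not reference the expected control-plane endpoint;
	      # for the missing files above the grep simply fails and rm -f is a no-op.
	      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done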
	I0429 20:10:12.056403   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:14.059895   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:13.257345   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:15.257388   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:17.259138   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:16.557200   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:18.559617   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:19.756708   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:21.757655   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:21.056581   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:23.057477   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:24.256386   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:26.757303   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:25.556902   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:28.055172   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:30.056549   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:29.256790   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:31.757538   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:32.560174   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:35.056286   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:33.758717   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:36.257274   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:37.056603   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:39.557292   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:38.757913   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:40.758857   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:42.056927   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:44.557003   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:44.557038   66875 pod_ready.go:81] duration metric: took 4m0.008018273s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	E0429 20:10:44.557050   66875 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0429 20:10:44.557062   66875 pod_ready.go:38] duration metric: took 4m2.911025288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
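	The WaitExtra deadline above expires because metrics-server-569cc877fc-g6gw2 never reaches Ready within the 4m0s budget. A hedged diagnostic sketch for such a pod (pod name and namespace taken from the log; the owning Deployment name is an assumption):

	    kubectl -n kube-system describe pod metrics-server-569cc877fc-g6gw2   # events usually show the image-pull or readiness-probe failure
	    kubectl -n kube-system logs deploy/metrics-server --tail=100          # assumes the Deployment is named metrics-server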
	I0429 20:10:44.557085   66875 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:10:44.557123   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:44.557191   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:44.620871   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:44.620900   66875 cri.go:89] found id: ""
	I0429 20:10:44.620910   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:44.620970   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.626852   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:44.626919   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:44.673726   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:44.673753   66875 cri.go:89] found id: ""
	I0429 20:10:44.673762   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:44.673827   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.680083   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:44.680157   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:44.724866   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:44.724899   66875 cri.go:89] found id: ""
	I0429 20:10:44.724909   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:44.724976   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.730438   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:44.730492   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:44.785159   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:44.785178   66875 cri.go:89] found id: ""
	I0429 20:10:44.785185   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:44.785230   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.790370   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:44.790432   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:44.839200   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:44.839219   66875 cri.go:89] found id: ""
	I0429 20:10:44.839226   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:44.839277   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.845411   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:44.845490   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:44.907184   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:44.907210   66875 cri.go:89] found id: ""
	I0429 20:10:44.907224   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:44.907281   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.914531   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:44.914596   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:44.957389   66875 cri.go:89] found id: ""
	I0429 20:10:44.957422   66875 logs.go:276] 0 containers: []
	W0429 20:10:44.957430   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:44.957436   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:44.957493   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:45.001760   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:45.001783   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:45.001789   66875 cri.go:89] found id: ""
	I0429 20:10:45.001796   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:45.001845   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:45.007293   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:45.012864   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:45.012886   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:45.406875   66218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.862702626s)
	I0429 20:10:45.406957   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:45.424927   66218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:10:45.436628   66218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:10:45.447896   66218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:10:45.447921   66218 kubeadm.go:156] found existing configuration files:
	
	I0429 20:10:45.447970   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:10:45.458604   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:10:45.458662   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:10:45.469701   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:10:45.479738   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:10:45.479796   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:10:45.490097   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:10:45.500840   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:10:45.500903   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:10:45.512918   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:10:45.524679   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:10:45.524756   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:10:45.536044   66218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:10:45.598481   66218 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:10:45.598556   66218 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:10:45.783162   66218 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:10:45.783321   66218 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:10:45.783481   66218 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:10:46.079842   66218 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:10:46.081981   66218 out.go:204]   - Generating certificates and keys ...
	I0429 20:10:46.082084   66218 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:10:46.082174   66218 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:10:46.082295   66218 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:10:46.082382   66218 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:10:46.082485   66218 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:10:46.082578   66218 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:10:46.082694   66218 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:10:46.082793   66218 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:10:46.082906   66218 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:10:46.082976   66218 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:10:46.083009   66218 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:10:46.083070   66218 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:10:46.242368   66218 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:10:46.667998   66218 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:10:46.832801   66218 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:10:47.033146   66218 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:10:47.265305   66218 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:10:47.266631   66218 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:10:47.271057   66218 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:10:47.273021   66218 out.go:204]   - Booting up control plane ...
	I0429 20:10:47.273128   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:10:47.273245   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:10:47.273333   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:10:47.293530   66218 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:10:47.294487   66218 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:10:47.294564   66218 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:10:47.435669   66218 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:10:47.435802   66218 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:10:43.256983   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:45.257106   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:47.757018   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:45.564197   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:45.564231   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:45.635133   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:45.635168   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:45.779957   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:45.779992   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:45.827796   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:45.827828   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:45.870603   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:45.870636   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:45.935181   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:45.935220   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:46.007476   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:46.007518   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:46.071132   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:46.071169   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:46.130185   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:46.130218   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:46.148649   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:46.148684   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:46.196227   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:46.196266   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:46.245663   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:46.245707   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:48.789522   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:48.810752   66875 api_server.go:72] duration metric: took 4m14.399329979s to wait for apiserver process to appear ...
	I0429 20:10:48.810785   66875 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:10:48.810826   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:48.810921   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:48.868391   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:48.868415   66875 cri.go:89] found id: ""
	I0429 20:10:48.868424   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:48.868490   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.874253   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:48.874329   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:48.934057   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:48.934103   66875 cri.go:89] found id: ""
	I0429 20:10:48.934113   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:48.934173   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.940161   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:48.940244   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:48.992205   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:48.992227   66875 cri.go:89] found id: ""
	I0429 20:10:48.992234   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:48.992297   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.997496   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:48.997568   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:49.038579   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:49.038612   66875 cri.go:89] found id: ""
	I0429 20:10:49.038622   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:49.038683   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.045062   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:49.045129   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:49.084533   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:49.084561   66875 cri.go:89] found id: ""
	I0429 20:10:49.084570   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:49.084628   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.089601   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:49.089680   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:49.133281   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:49.133315   66875 cri.go:89] found id: ""
	I0429 20:10:49.133324   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:49.133387   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.140784   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:49.140889   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:49.201071   66875 cri.go:89] found id: ""
	I0429 20:10:49.201102   66875 logs.go:276] 0 containers: []
	W0429 20:10:49.201112   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:49.201117   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:49.201182   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:49.248708   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:49.248732   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:49.248738   66875 cri.go:89] found id: ""
	I0429 20:10:49.248747   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:49.248807   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.254131   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.259257   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:49.259287   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:49.325386   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:49.325417   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:49.371335   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:49.371365   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:49.414056   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:49.414112   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:49.469457   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:49.469493   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:49.523091   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:49.523123   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:49.581937   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:49.581977   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:49.599704   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:49.599738   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:49.738943   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:49.738984   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:49.814482   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:49.814521   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:50.306035   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:50.306084   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:50.371400   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:50.371485   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:50.426578   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:50.426613   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:48.438095   66218 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002489157s
	I0429 20:10:48.438230   66218 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:10:49.758262   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:52.256578   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:53.941848   66218 kubeadm.go:309] [api-check] The API server is healthy after 5.503491397s
	I0429 20:10:53.961404   66218 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:10:53.979792   66218 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:10:54.018524   66218 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:10:54.018776   66218 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-456788 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:10:54.037050   66218 kubeadm.go:309] [bootstrap-token] Using token: 793n05.pmfi0tdyn7q4x0lt
	I0429 20:10:54.038421   66218 out.go:204]   - Configuring RBAC rules ...
	I0429 20:10:54.038551   66218 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:10:54.045190   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:10:54.054625   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:10:54.060216   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:10:54.068878   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:10:54.073537   66218 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:10:54.355285   66218 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:10:54.800956   66218 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:10:55.352995   66218 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:10:55.353026   66218 kubeadm.go:309] 
	I0429 20:10:55.353135   66218 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:10:55.353158   66218 kubeadm.go:309] 
	I0429 20:10:55.353245   66218 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:10:55.353254   66218 kubeadm.go:309] 
	I0429 20:10:55.353290   66218 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:10:55.353382   66218 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:10:55.353456   66218 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:10:55.353467   66218 kubeadm.go:309] 
	I0429 20:10:55.353564   66218 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:10:55.353578   66218 kubeadm.go:309] 
	I0429 20:10:55.353637   66218 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:10:55.353648   66218 kubeadm.go:309] 
	I0429 20:10:55.353735   66218 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:10:55.353937   66218 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:10:55.354052   66218 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:10:55.354095   66218 kubeadm.go:309] 
	I0429 20:10:55.354216   66218 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:10:55.354334   66218 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:10:55.354348   66218 kubeadm.go:309] 
	I0429 20:10:55.354464   66218 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 793n05.pmfi0tdyn7q4x0lt \
	I0429 20:10:55.354615   66218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 20:10:55.354643   66218 kubeadm.go:309] 	--control-plane 
	I0429 20:10:55.354667   66218 kubeadm.go:309] 
	I0429 20:10:55.354799   66218 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:10:55.354810   66218 kubeadm.go:309] 
	I0429 20:10:55.354943   66218 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 793n05.pmfi0tdyn7q4x0lt \
	I0429 20:10:55.355111   66218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
	I0429 20:10:55.355493   66218 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:10:55.355513   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:10:55.355520   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:10:55.357341   66218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:10:52.999575   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:10:53.005598   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 200:
	ok
	I0429 20:10:53.006923   66875 api_server.go:141] control plane version: v1.30.0
	I0429 20:10:53.006951   66875 api_server.go:131] duration metric: took 4.196158371s to wait for apiserver health ...
	I0429 20:10:53.006978   66875 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:10:53.007011   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:53.007073   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:53.064156   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:53.064186   66875 cri.go:89] found id: ""
	I0429 20:10:53.064196   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:53.064256   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.069282   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:53.069361   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:53.128981   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:53.129016   66875 cri.go:89] found id: ""
	I0429 20:10:53.129025   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:53.129086   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.134680   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:53.134779   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:53.188828   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:53.188857   66875 cri.go:89] found id: ""
	I0429 20:10:53.188869   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:53.188922   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.195332   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:53.195401   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:53.245528   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:53.245548   66875 cri.go:89] found id: ""
	I0429 20:10:53.245556   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:53.245617   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.251849   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:53.251925   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:53.302914   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:53.302941   66875 cri.go:89] found id: ""
	I0429 20:10:53.302950   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:53.303004   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.308072   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:53.308138   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:53.358655   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:53.358684   66875 cri.go:89] found id: ""
	I0429 20:10:53.358693   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:53.358753   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.363796   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:53.363875   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:53.413543   66875 cri.go:89] found id: ""
	I0429 20:10:53.413573   66875 logs.go:276] 0 containers: []
	W0429 20:10:53.413586   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:53.413593   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:53.413651   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:53.457365   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:53.457393   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:53.457399   66875 cri.go:89] found id: ""
	I0429 20:10:53.457409   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:53.457473   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.464321   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.469358   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:53.469377   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:53.605546   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:53.605594   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:53.682788   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:53.682837   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:53.725985   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:53.726017   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:53.775864   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:53.775890   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:53.834762   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:53.834801   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:53.853796   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:53.853830   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:53.915651   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:53.915680   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:53.968857   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:53.968885   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:54.024061   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:54.024090   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:54.079637   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:54.079674   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:54.129296   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:54.129325   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:54.499803   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:54.499861   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:57.070245   66875 system_pods.go:59] 8 kube-system pods found
	I0429 20:10:57.070288   66875 system_pods.go:61] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running
	I0429 20:10:57.070296   66875 system_pods.go:61] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running
	I0429 20:10:57.070302   66875 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running
	I0429 20:10:57.070308   66875 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running
	I0429 20:10:57.070313   66875 system_pods.go:61] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running
	I0429 20:10:57.070318   66875 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running
	I0429 20:10:57.070329   66875 system_pods.go:61] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:10:57.070339   66875 system_pods.go:61] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running
	I0429 20:10:57.070353   66875 system_pods.go:74] duration metric: took 4.063366088s to wait for pod list to return data ...
	I0429 20:10:57.070366   66875 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:10:57.077008   66875 default_sa.go:45] found service account: "default"
	I0429 20:10:57.077031   66875 default_sa.go:55] duration metric: took 6.655489ms for default service account to be created ...
	I0429 20:10:57.077040   66875 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:10:57.087665   66875 system_pods.go:86] 8 kube-system pods found
	I0429 20:10:57.087695   66875 system_pods.go:89] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running
	I0429 20:10:57.087701   66875 system_pods.go:89] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running
	I0429 20:10:57.087707   66875 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running
	I0429 20:10:57.087711   66875 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running
	I0429 20:10:57.087715   66875 system_pods.go:89] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running
	I0429 20:10:57.087719   66875 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running
	I0429 20:10:57.087726   66875 system_pods.go:89] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:10:57.087730   66875 system_pods.go:89] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running
	I0429 20:10:57.087740   66875 system_pods.go:126] duration metric: took 10.694398ms to wait for k8s-apps to be running ...
	I0429 20:10:57.087749   66875 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:10:57.087794   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:57.106878   66875 system_svc.go:56] duration metric: took 19.118595ms WaitForService to wait for kubelet
	I0429 20:10:57.106917   66875 kubeadm.go:576] duration metric: took 4m22.695498557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:10:57.106945   66875 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:10:57.111052   66875 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:10:57.111082   66875 node_conditions.go:123] node cpu capacity is 2
	I0429 20:10:57.111096   66875 node_conditions.go:105] duration metric: took 4.144283ms to run NodePressure ...
	I0429 20:10:57.111112   66875 start.go:240] waiting for startup goroutines ...
	I0429 20:10:57.111122   66875 start.go:245] waiting for cluster config update ...
	I0429 20:10:57.111141   66875 start.go:254] writing updated cluster config ...
	I0429 20:10:57.111536   66875 ssh_runner.go:195] Run: rm -f paused
	I0429 20:10:57.169536   66875 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:10:57.172347   66875 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-866143" cluster and "default" namespace by default
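	With the default-k8s-diff-port-866143 profile reported as done, a quick manual follow-up (a sketch; it assumes kubectl is on PATH and that the context name matches the profile name, as the last line states):

	    kubectl config current-context            # expected: default-k8s-diff-port-866143
	    kubectl get pods -n kube-system           # metrics-server-569cc877fc-g6gw2 was still Pending above
	    kubectl get --raw /healthz                # the same probe the log issued against https://192.168.61.106:8444/healthz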
	I0429 20:10:55.358683   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:10:55.371397   66218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:10:55.397119   66218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:10:55.397192   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:55.397192   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-456788 minikube.k8s.io/updated_at=2024_04_29T20_10_55_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=no-preload-456788 minikube.k8s.io/primary=true
	I0429 20:10:55.605222   66218 ops.go:34] apiserver oom_adj: -16
	I0429 20:10:55.605588   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:56.106450   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:56.605894   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:57.105657   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:57.605823   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:54.258101   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:56.258336   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:58.106263   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:58.605675   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:59.106483   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:59.605671   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:00.105670   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:00.605695   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:01.106482   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:01.606206   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:02.106534   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:02.606372   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:58.756416   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:00.756875   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:02.756955   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:03.106555   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:03.606298   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.106227   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.606531   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:05.105708   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:05.605735   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:06.106556   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:06.606380   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:07.105690   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:07.605718   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.749964   65980 pod_ready.go:81] duration metric: took 4m0.000195525s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" ...
	E0429 20:11:04.749999   65980 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 20:11:04.750024   65980 pod_ready.go:38] duration metric: took 4m6.211964949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:04.750053   65980 kubeadm.go:591] duration metric: took 4m17.268163648s to restartPrimaryControlPlane
	W0429 20:11:04.750123   65980 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:11:04.750156   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:11:08.106383   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:08.606498   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:08.726533   66218 kubeadm.go:1107] duration metric: took 13.329402445s to wait for elevateKubeSystemPrivileges
	W0429 20:11:08.726584   66218 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:11:08.726596   66218 kubeadm.go:393] duration metric: took 5m14.838913251s to StartCluster
	I0429 20:11:08.726617   66218 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:08.726706   66218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:11:08.729364   66218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:08.730202   66218 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:11:08.731600   66218 out.go:177] * Verifying Kubernetes components...
	I0429 20:11:08.730245   66218 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:11:08.730446   66218 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:11:08.733479   66218 addons.go:69] Setting storage-provisioner=true in profile "no-preload-456788"
	I0429 20:11:08.733509   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:11:08.733518   66218 addons.go:69] Setting default-storageclass=true in profile "no-preload-456788"
	I0429 20:11:08.733540   66218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-456788"
	I0429 20:11:08.733514   66218 addons.go:234] Setting addon storage-provisioner=true in "no-preload-456788"
	W0429 20:11:08.733641   66218 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:11:08.733674   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.733963   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.733988   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.734081   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.734079   66218 addons.go:69] Setting metrics-server=true in profile "no-preload-456788"
	I0429 20:11:08.734106   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.734117   66218 addons.go:234] Setting addon metrics-server=true in "no-preload-456788"
	W0429 20:11:08.734126   66218 addons.go:243] addon metrics-server should already be in state true
	I0429 20:11:08.734154   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.734503   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.734536   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.754451   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I0429 20:11:08.754650   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0429 20:11:08.754827   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
	I0429 20:11:08.755114   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755237   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755332   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755884   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.755905   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756031   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.756048   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.756050   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756062   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756456   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756477   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756513   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756853   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.757231   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.757254   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.757256   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.757291   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.761534   66218 addons.go:234] Setting addon default-storageclass=true in "no-preload-456788"
	W0429 20:11:08.761551   66218 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:11:08.761574   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.761857   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.761894   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.776659   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0429 20:11:08.776838   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0429 20:11:08.777067   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.777462   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.777643   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.777657   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.778152   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.778162   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.778170   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.778371   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.778845   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.778901   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0429 20:11:08.779220   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.779415   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.779446   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.779621   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.779634   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.780051   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.780246   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.780506   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.782432   66218 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:11:08.783809   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:11:08.783825   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:11:08.783843   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.782370   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.786004   66218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:11:08.787488   66218 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:08.787506   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:11:08.787663   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.788245   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.788290   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.788308   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.788381   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.788632   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.788834   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.788985   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:08.791587   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.791964   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.792052   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.792293   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.792477   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.792614   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.792712   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:08.798944   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I0429 20:11:08.799562   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.800224   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.800243   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.800790   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.801008   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.803220   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.803519   66218 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:08.803534   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:11:08.803552   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.806797   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.807216   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.807244   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.807540   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.807986   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.808170   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.808313   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:09.006753   66218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:11:09.038156   66218 node_ready.go:35] waiting up to 6m0s for node "no-preload-456788" to be "Ready" ...
	I0429 20:11:09.051516   66218 node_ready.go:49] node "no-preload-456788" has status "Ready":"True"
	I0429 20:11:09.051545   66218 node_ready.go:38] duration metric: took 13.34705ms for node "no-preload-456788" to be "Ready" ...
	I0429 20:11:09.051557   66218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:09.064032   66218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:09.308339   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:09.308749   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:11:09.308773   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:11:09.309961   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:09.347829   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:11:09.347860   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:11:09.466683   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:11:09.466718   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:11:09.678800   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:11:09.718867   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.718899   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.719248   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.719276   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.719273   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:09.719288   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.719296   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.719553   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.719574   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.719581   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:09.726177   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.726204   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.726527   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.726544   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.726590   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:10.570942   66218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.260944092s)
	I0429 20:11:10.571001   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.571012   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.571480   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.571504   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.571520   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.571528   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.571792   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:10.571818   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.571833   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.912211   66218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.233359134s)
	I0429 20:11:10.912282   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.912298   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.912746   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.912769   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.912779   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.912787   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.913055   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.913108   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.913132   66218 addons.go:470] Verifying addon metrics-server=true in "no-preload-456788"
	I0429 20:11:10.916694   66218 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0429 20:11:10.918273   66218 addons.go:505] duration metric: took 2.188028967s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
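For reference, the metrics-server addon enabled above can be checked by hand against the same cluster; this is only a hedged sketch (the context name no-preload-456788 and the deployment name metrics-server are taken from this log, the commands themselves are plain kubectl):

	# does the metrics-server Deployment report an available replica?
	kubectl --context no-preload-456788 -n kube-system get deploy metrics-server
	# node metrics only resolve once the metrics APIService is serving
	kubectl --context no-preload-456788 top nodes

The second command succeeds only after the APIService registered by metrics-apiservice.yaml (v1beta1.metrics.k8s.io) starts serving, which is the condition the later AddonExistsAfterStop tests wait on.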
	I0429 20:11:11.108067   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.108091   66218 pod_ready.go:81] duration metric: took 2.044032617s for pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.108103   66218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.115163   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.115196   66218 pod_ready.go:81] duration metric: took 7.084503ms for pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.115210   66218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.129264   66218 pod_ready.go:92] pod "etcd-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.129286   66218 pod_ready.go:81] duration metric: took 14.068541ms for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.129297   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.148114   66218 pod_ready.go:92] pod "kube-apiserver-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.148142   66218 pod_ready.go:81] duration metric: took 18.837962ms for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.148155   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.157985   66218 pod_ready.go:92] pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.158006   66218 pod_ready.go:81] duration metric: took 9.844321ms for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.158016   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6m95d" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.469680   66218 pod_ready.go:92] pod "kube-proxy-6m95d" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.469701   66218 pod_ready.go:81] duration metric: took 311.678646ms for pod "kube-proxy-6m95d" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.469710   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.868513   66218 pod_ready.go:92] pod "kube-scheduler-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.868539   66218 pod_ready.go:81] duration metric: took 398.821528ms for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.868550   66218 pod_ready.go:38] duration metric: took 2.816983409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:11.868569   66218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:11:11.868632   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:11:11.885115   66218 api_server.go:72] duration metric: took 3.154873937s to wait for apiserver process to appear ...
	I0429 20:11:11.885146   66218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:11:11.885169   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:11:11.890715   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I0429 20:11:11.891649   66218 api_server.go:141] control plane version: v1.30.0
	I0429 20:11:11.891671   66218 api_server.go:131] duration metric: took 6.518818ms to wait for apiserver health ...
	I0429 20:11:11.891679   66218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:11:12.072142   66218 system_pods.go:59] 9 kube-system pods found
	I0429 20:11:12.072175   66218 system_pods.go:61] "coredns-7db6d8ff4d-hcfbq" [c0b53824-478e-4523-ada4-1cd7ba306c81] Running
	I0429 20:11:12.072183   66218 system_pods.go:61] "coredns-7db6d8ff4d-pvhwv" [f38ee7b3-53fe-4609-9b2b-000f55de5d5c] Running
	I0429 20:11:12.072188   66218 system_pods.go:61] "etcd-no-preload-456788" [b0629d4c-643a-485d-aa85-33fe009fff50] Running
	I0429 20:11:12.072194   66218 system_pods.go:61] "kube-apiserver-no-preload-456788" [e56edf5c-9883-4cd9-abab-09902048f584] Running
	I0429 20:11:12.072200   66218 system_pods.go:61] "kube-controller-manager-no-preload-456788" [bfaf44f0-da19-4cec-bec9-d9917cb8a571] Running
	I0429 20:11:12.072205   66218 system_pods.go:61] "kube-proxy-6m95d" [25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7] Running
	I0429 20:11:12.072209   66218 system_pods.go:61] "kube-scheduler-no-preload-456788" [de4f90f7-05d6-4755-a4c0-2c522f7fe88c] Running
	I0429 20:11:12.072217   66218 system_pods.go:61] "metrics-server-569cc877fc-sxgwr" [046d28fe-d51e-43ba-9550-d1d7e33d9d84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:11:12.072224   66218 system_pods.go:61] "storage-provisioner" [fd1c4813-8889-4f21-b21e-6007eaa163a6] Running
	I0429 20:11:12.072247   66218 system_pods.go:74] duration metric: took 180.561509ms to wait for pod list to return data ...
	I0429 20:11:12.072256   66218 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:11:12.267637   66218 default_sa.go:45] found service account: "default"
	I0429 20:11:12.267663   66218 default_sa.go:55] duration metric: took 195.398841ms for default service account to be created ...
	I0429 20:11:12.267677   66218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:11:12.471933   66218 system_pods.go:86] 9 kube-system pods found
	I0429 20:11:12.471967   66218 system_pods.go:89] "coredns-7db6d8ff4d-hcfbq" [c0b53824-478e-4523-ada4-1cd7ba306c81] Running
	I0429 20:11:12.471975   66218 system_pods.go:89] "coredns-7db6d8ff4d-pvhwv" [f38ee7b3-53fe-4609-9b2b-000f55de5d5c] Running
	I0429 20:11:12.471981   66218 system_pods.go:89] "etcd-no-preload-456788" [b0629d4c-643a-485d-aa85-33fe009fff50] Running
	I0429 20:11:12.471987   66218 system_pods.go:89] "kube-apiserver-no-preload-456788" [e56edf5c-9883-4cd9-abab-09902048f584] Running
	I0429 20:11:12.471994   66218 system_pods.go:89] "kube-controller-manager-no-preload-456788" [bfaf44f0-da19-4cec-bec9-d9917cb8a571] Running
	I0429 20:11:12.471999   66218 system_pods.go:89] "kube-proxy-6m95d" [25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7] Running
	I0429 20:11:12.472008   66218 system_pods.go:89] "kube-scheduler-no-preload-456788" [de4f90f7-05d6-4755-a4c0-2c522f7fe88c] Running
	I0429 20:11:12.472020   66218 system_pods.go:89] "metrics-server-569cc877fc-sxgwr" [046d28fe-d51e-43ba-9550-d1d7e33d9d84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:11:12.472027   66218 system_pods.go:89] "storage-provisioner" [fd1c4813-8889-4f21-b21e-6007eaa163a6] Running
	I0429 20:11:12.472039   66218 system_pods.go:126] duration metric: took 204.355515ms to wait for k8s-apps to be running ...
	I0429 20:11:12.472052   66218 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:11:12.472110   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:11:12.487748   66218 system_svc.go:56] duration metric: took 15.68796ms WaitForService to wait for kubelet
	I0429 20:11:12.487779   66218 kubeadm.go:576] duration metric: took 3.757538662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:11:12.487804   66218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:11:12.668597   66218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:11:12.668619   66218 node_conditions.go:123] node cpu capacity is 2
	I0429 20:11:12.668629   66218 node_conditions.go:105] duration metric: took 180.819727ms to run NodePressure ...
	I0429 20:11:12.668640   66218 start.go:240] waiting for startup goroutines ...
	I0429 20:11:12.668646   66218 start.go:245] waiting for cluster config update ...
	I0429 20:11:12.668656   66218 start.go:254] writing updated cluster config ...
	I0429 20:11:12.668905   66218 ssh_runner.go:195] Run: rm -f paused
	I0429 20:11:12.718997   66218 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:11:12.720757   66218 out.go:177] * Done! kubectl is now configured to use "no-preload-456788" cluster and "default" namespace by default
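From this point on, kubectl calls without an explicit --kubeconfig go through the file updated at 20:11:08 above. A minimal sanity check, as a sketch only (the kubeconfig path is the one shown earlier in this log; minikube names the context after the profile):

	# confirm the default context now points at the new profile
	KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig kubectl config current-context
	# expected output: no-preload-456788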
	I0429 20:11:37.819019   65980 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.068841912s)
	I0429 20:11:37.819092   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:11:37.836850   65980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:11:37.849684   65980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:11:37.861597   65980 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:11:37.861626   65980 kubeadm.go:156] found existing configuration files:
	
	I0429 20:11:37.861674   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:11:37.872799   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:11:37.872860   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:11:37.884336   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:11:37.895124   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:11:37.895181   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:11:37.906874   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:11:37.917482   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:11:37.917530   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:11:37.928137   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:11:37.938698   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:11:37.938750   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:11:37.949658   65980 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:11:38.159358   65980 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:11:46.848042   65980 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:11:46.848108   65980 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:11:46.848169   65980 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:11:46.848308   65980 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:11:46.848447   65980 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:11:46.848531   65980 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:11:46.850368   65980 out.go:204]   - Generating certificates and keys ...
	I0429 20:11:46.850444   65980 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:11:46.850496   65980 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:11:46.850580   65980 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:11:46.850649   65980 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:11:46.850742   65980 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:11:46.850850   65980 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:11:46.850949   65980 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:11:46.851018   65980 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:11:46.851117   65980 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:11:46.851201   65980 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:11:46.851263   65980 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:11:46.851327   65980 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:11:46.851395   65980 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:11:46.851466   65980 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:11:46.851513   65980 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:11:46.851605   65980 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:11:46.851690   65980 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:11:46.851791   65980 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:11:46.851878   65980 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:11:46.853420   65980 out.go:204]   - Booting up control plane ...
	I0429 20:11:46.853526   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:11:46.853617   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:11:46.853696   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:11:46.853791   65980 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:11:46.853866   65980 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:11:46.853900   65980 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:11:46.854010   65980 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:11:46.854094   65980 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:11:46.854148   65980 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.976221ms
	I0429 20:11:46.854240   65980 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:11:46.854311   65980 kubeadm.go:309] [api-check] The API server is healthy after 5.50298765s
	I0429 20:11:46.854407   65980 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:11:46.854509   65980 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:11:46.854565   65980 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:11:46.854726   65980 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-161370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:11:46.854783   65980 kubeadm.go:309] [bootstrap-token] Using token: 93xwhj.zowa67wvl54p1iru
	I0429 20:11:46.856308   65980 out.go:204]   - Configuring RBAC rules ...
	I0429 20:11:46.856452   65980 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:11:46.856561   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:11:46.856736   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:11:46.856867   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:11:46.857018   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:11:46.857140   65980 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:11:46.857294   65980 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:11:46.857358   65980 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:11:46.857419   65980 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:11:46.857428   65980 kubeadm.go:309] 
	I0429 20:11:46.857502   65980 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:11:46.857514   65980 kubeadm.go:309] 
	I0429 20:11:46.857606   65980 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:11:46.857617   65980 kubeadm.go:309] 
	I0429 20:11:46.857649   65980 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:11:46.857725   65980 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:11:46.857797   65980 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:11:46.857806   65980 kubeadm.go:309] 
	I0429 20:11:46.857880   65980 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:11:46.857889   65980 kubeadm.go:309] 
	I0429 20:11:46.857947   65980 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:11:46.857955   65980 kubeadm.go:309] 
	I0429 20:11:46.858020   65980 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:11:46.858125   65980 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:11:46.858216   65980 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:11:46.858224   65980 kubeadm.go:309] 
	I0429 20:11:46.858325   65980 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:11:46.858433   65980 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:11:46.858442   65980 kubeadm.go:309] 
	I0429 20:11:46.858553   65980 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 93xwhj.zowa67wvl54p1iru \
	I0429 20:11:46.858696   65980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 20:11:46.858722   65980 kubeadm.go:309] 	--control-plane 
	I0429 20:11:46.858728   65980 kubeadm.go:309] 
	I0429 20:11:46.858797   65980 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:11:46.858803   65980 kubeadm.go:309] 
	I0429 20:11:46.858881   65980 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 93xwhj.zowa67wvl54p1iru \
	I0429 20:11:46.859014   65980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
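The join commands above embed the bootstrap token 93xwhj.zowa67wvl54p1iru, which kubeadm issues with a limited lifetime (24h by default), so they are only usable while that token is valid. For a node added later, a fresh join command can be generated on the control plane; this is standard kubeadm usage rather than anything this test run performs:

	# mint a new bootstrap token and print a ready-to-run worker join command
	sudo kubeadm token create --print-join-command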
	I0429 20:11:46.859025   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:11:46.859034   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:11:46.861619   65980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:11:46.863111   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:11:46.875965   65980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
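The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration that CRI-O loads for pod networking. Its actual contents are not captured in this log; the snippet below is only a generic bridge-plus-portmap conflist of the kind found in that directory, with the bridge name and pod subnet as illustrative assumptions:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16",
	        "routes": [{ "dst": "0.0.0.0/0" }]
	      }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}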
	I0429 20:11:46.897147   65980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:11:46.897225   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:46.897238   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-161370 minikube.k8s.io/updated_at=2024_04_29T20_11_46_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=embed-certs-161370 minikube.k8s.io/primary=true
	I0429 20:11:46.927555   65980 ops.go:34] apiserver oom_adj: -16
	I0429 20:11:47.119594   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:47.620640   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:48.119974   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:48.620618   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:49.120107   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:49.620349   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:50.120180   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:50.620533   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:51.120332   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:51.620669   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:52.119922   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:52.620467   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:53.120486   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:53.620314   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:54.120159   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:54.620430   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:55.119995   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:55.620496   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:56.120152   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:56.620390   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:57.120090   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:57.619671   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:58.120549   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:58.620334   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.120532   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.619732   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.765502   65980 kubeadm.go:1107] duration metric: took 12.868344365s to wait for elevateKubeSystemPrivileges
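The repeated "kubectl get sa default" calls above are the elevateKubeSystemPrivileges wait loop: after creating the minikube-rbac clusterrolebinding at 20:11:46, minikube polls until the default ServiceAccount exists before proceeding. The same checks can be reproduced by hand on the node; a sketch, where the polling command appears verbatim above and the clusterrolebinding name comes from the create call at 20:11:46:

	# the binding minikube created for the kube-system default ServiceAccount
	sudo /var/lib/minikube/binaries/v1.30.0/kubectl get clusterrolebinding minikube-rbac --kubeconfig=/var/lib/minikube/kubeconfig -o wide
	# the readiness probe the loop repeats until it succeeds
	sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig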
	W0429 20:11:59.765550   65980 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:11:59.765561   65980 kubeadm.go:393] duration metric: took 5m12.339650014s to StartCluster
	I0429 20:11:59.765582   65980 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:59.765671   65980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:11:59.767924   65980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:59.768253   65980 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:11:59.769950   65980 out.go:177] * Verifying Kubernetes components...
	I0429 20:11:59.768323   65980 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:11:59.768433   65980 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:11:59.771281   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:11:59.771300   65980 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-161370"
	I0429 20:11:59.771313   65980 addons.go:69] Setting default-storageclass=true in profile "embed-certs-161370"
	I0429 20:11:59.771332   65980 addons.go:69] Setting metrics-server=true in profile "embed-certs-161370"
	I0429 20:11:59.771344   65980 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-161370"
	W0429 20:11:59.771355   65980 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:11:59.771361   65980 addons.go:234] Setting addon metrics-server=true in "embed-certs-161370"
	W0429 20:11:59.771370   65980 addons.go:243] addon metrics-server should already be in state true
	I0429 20:11:59.771399   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.771401   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.771354   65980 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-161370"
	I0429 20:11:59.771757   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771768   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771772   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771783   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.771786   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.771788   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.787359   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0429 20:11:59.787384   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0429 20:11:59.787503   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46153
	I0429 20:11:59.787764   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.787987   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.788069   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.788254   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788273   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.788708   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788724   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.788773   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.788832   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788852   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.789102   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.789117   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.789267   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.789478   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.789510   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.790170   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.790220   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.792108   65980 addons.go:234] Setting addon default-storageclass=true in "embed-certs-161370"
	W0429 20:11:59.792127   65980 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:11:59.792154   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.792386   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.792424   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.808581   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35659
	I0429 20:11:59.808924   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44943
	I0429 20:11:59.808943   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.809461   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.809481   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.809561   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.809791   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.810335   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.810357   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.810976   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.810992   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.811324   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.811604   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0429 20:11:59.811758   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.812141   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.812592   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.812610   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.813130   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.813351   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.813614   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.815589   65980 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:11:59.817004   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:11:59.817014   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:11:59.817027   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.815020   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.818585   65980 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:11:59.820110   65980 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:59.820125   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:11:59.820140   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.819840   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.820305   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.820333   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.820563   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.820722   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.820874   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.820998   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.822849   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.823299   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.823323   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.823460   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.823599   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.823924   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.824039   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.827552   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0429 20:11:59.827976   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.828369   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.828389   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.828754   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.828921   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.830295   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.830566   65980 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:59.830578   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:11:59.830590   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.833174   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.833526   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.833545   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.833759   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.833910   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.834029   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.834166   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.978978   65980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:11:59.995547   65980 node_ready.go:35] waiting up to 6m0s for node "embed-certs-161370" to be "Ready" ...
	I0429 20:12:00.003802   65980 node_ready.go:49] node "embed-certs-161370" has status "Ready":"True"
	I0429 20:12:00.003823   65980 node_ready.go:38] duration metric: took 8.245639ms for node "embed-certs-161370" to be "Ready" ...
	I0429 20:12:00.003833   65980 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:12:00.010487   65980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:00.072627   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:12:00.075716   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:12:00.177043   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:12:00.177069   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:12:00.278082   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:12:00.278112   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:12:00.311731   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:12:00.311756   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:12:00.369982   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:12:00.642840   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.642865   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643084   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.643109   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643227   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.643240   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.643248   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.643256   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643374   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.645085   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645103   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.645112   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.645121   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.645196   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645228   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.645231   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.645331   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645343   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.658929   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.658955   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.659236   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.659267   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.659281   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.103183   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:01.103207   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:01.103488   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:01.103542   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.103557   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:01.103541   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:01.103584   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:01.105440   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:01.105461   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.105473   65980 addons.go:470] Verifying addon metrics-server=true in "embed-certs-161370"
	I0429 20:12:01.107435   65980 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0429 20:12:01.109051   65980 addons.go:505] duration metric: took 1.340729876s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0429 20:12:02.029772   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace has status "Ready":"False"
	I0429 20:12:02.520396   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.520417   65980 pod_ready.go:81] duration metric: took 2.509903724s for pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.520426   65980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.529115   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.529141   65980 pod_ready.go:81] duration metric: took 8.707165ms for pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.529153   65980 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.539459   65980 pod_ready.go:92] pod "etcd-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.539478   65980 pod_ready.go:81] duration metric: took 10.318294ms for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.539489   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.543813   65980 pod_ready.go:92] pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.543830   65980 pod_ready.go:81] duration metric: took 4.333619ms for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.543839   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.549343   65980 pod_ready.go:92] pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.549363   65980 pod_ready.go:81] duration metric: took 5.516323ms for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.549374   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wq48j" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.915209   65980 pod_ready.go:92] pod "kube-proxy-wq48j" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.915232   65980 pod_ready.go:81] duration metric: took 365.851814ms for pod "kube-proxy-wq48j" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.915240   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:03.315564   65980 pod_ready.go:92] pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:03.315587   65980 pod_ready.go:81] duration metric: took 400.340876ms for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:03.315595   65980 pod_ready.go:38] duration metric: took 3.311752591s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:12:03.315609   65980 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:12:03.315655   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:12:03.333491   65980 api_server.go:72] duration metric: took 3.565200855s to wait for apiserver process to appear ...
	I0429 20:12:03.333521   65980 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:12:03.333538   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:12:03.338822   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0429 20:12:03.339975   65980 api_server.go:141] control plane version: v1.30.0
	I0429 20:12:03.339995   65980 api_server.go:131] duration metric: took 6.468233ms to wait for apiserver health ...
	I0429 20:12:03.340002   65980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:12:03.519016   65980 system_pods.go:59] 9 kube-system pods found
	I0429 20:12:03.519042   65980 system_pods.go:61] "coredns-7db6d8ff4d-7z6zv" [422451a2-615d-4bf8-8de8-d5fa5805219f] Running
	I0429 20:12:03.519047   65980 system_pods.go:61] "coredns-7db6d8ff4d-rr6bd" [6d14ff20-6dab-4c02-b91c-0a1e326f1593] Running
	I0429 20:12:03.519050   65980 system_pods.go:61] "etcd-embed-certs-161370" [ab19e79c-18bd-4d0d-b5cf-639453495383] Running
	I0429 20:12:03.519055   65980 system_pods.go:61] "kube-apiserver-embed-certs-161370" [6091dd0a-333d-4729-97db-eb7a30755db4] Running
	I0429 20:12:03.519059   65980 system_pods.go:61] "kube-controller-manager-embed-certs-161370" [de70d57c-9329-4d37-a838-9c9ae1e41871] Running
	I0429 20:12:03.519061   65980 system_pods.go:61] "kube-proxy-wq48j" [3b3b23ef-b5b4-4754-bc44-73e1d51a18d7] Running
	I0429 20:12:03.519065   65980 system_pods.go:61] "kube-scheduler-embed-certs-161370" [c7fd3d36-4e35-43b2-93e7-45129464937d] Running
	I0429 20:12:03.519071   65980 system_pods.go:61] "metrics-server-569cc877fc-x2wb6" [cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:12:03.519075   65980 system_pods.go:61] "storage-provisioner" [93e046a1-3867-44e1-8a4f-cf0eba6dfd6b] Running
	I0429 20:12:03.519082   65980 system_pods.go:74] duration metric: took 179.075384ms to wait for pod list to return data ...
	I0429 20:12:03.519089   65980 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:12:03.714354   65980 default_sa.go:45] found service account: "default"
	I0429 20:12:03.714384   65980 default_sa.go:55] duration metric: took 195.287433ms for default service account to be created ...
	I0429 20:12:03.714395   65980 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:12:03.918729   65980 system_pods.go:86] 9 kube-system pods found
	I0429 20:12:03.918755   65980 system_pods.go:89] "coredns-7db6d8ff4d-7z6zv" [422451a2-615d-4bf8-8de8-d5fa5805219f] Running
	I0429 20:12:03.918760   65980 system_pods.go:89] "coredns-7db6d8ff4d-rr6bd" [6d14ff20-6dab-4c02-b91c-0a1e326f1593] Running
	I0429 20:12:03.918765   65980 system_pods.go:89] "etcd-embed-certs-161370" [ab19e79c-18bd-4d0d-b5cf-639453495383] Running
	I0429 20:12:03.918769   65980 system_pods.go:89] "kube-apiserver-embed-certs-161370" [6091dd0a-333d-4729-97db-eb7a30755db4] Running
	I0429 20:12:03.918773   65980 system_pods.go:89] "kube-controller-manager-embed-certs-161370" [de70d57c-9329-4d37-a838-9c9ae1e41871] Running
	I0429 20:12:03.918777   65980 system_pods.go:89] "kube-proxy-wq48j" [3b3b23ef-b5b4-4754-bc44-73e1d51a18d7] Running
	I0429 20:12:03.918780   65980 system_pods.go:89] "kube-scheduler-embed-certs-161370" [c7fd3d36-4e35-43b2-93e7-45129464937d] Running
	I0429 20:12:03.918787   65980 system_pods.go:89] "metrics-server-569cc877fc-x2wb6" [cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:12:03.918791   65980 system_pods.go:89] "storage-provisioner" [93e046a1-3867-44e1-8a4f-cf0eba6dfd6b] Running
	I0429 20:12:03.918800   65980 system_pods.go:126] duration metric: took 204.399385ms to wait for k8s-apps to be running ...
	I0429 20:12:03.918809   65980 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:12:03.918851   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:12:03.937870   65980 system_svc.go:56] duration metric: took 19.05503ms WaitForService to wait for kubelet
	I0429 20:12:03.937892   65980 kubeadm.go:576] duration metric: took 4.169607456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:12:03.937910   65980 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:12:04.116479   65980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:12:04.116504   65980 node_conditions.go:123] node cpu capacity is 2
	I0429 20:12:04.116513   65980 node_conditions.go:105] duration metric: took 178.599246ms to run NodePressure ...
	I0429 20:12:04.116524   65980 start.go:240] waiting for startup goroutines ...
	I0429 20:12:04.116530   65980 start.go:245] waiting for cluster config update ...
	I0429 20:12:04.116540   65980 start.go:254] writing updated cluster config ...
	I0429 20:12:04.116799   65980 ssh_runner.go:195] Run: rm -f paused
	I0429 20:12:04.167803   65980 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:12:04.169861   65980 out.go:177] * Done! kubectl is now configured to use "embed-certs-161370" cluster and "default" namespace by default
	I0429 20:12:09.853929   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:12:09.854036   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:12:09.856141   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:12:09.856215   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:12:09.856314   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:12:09.856435   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:12:09.856529   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:12:09.856638   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:12:09.858658   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:12:09.858759   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:12:09.858821   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:12:09.858914   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:12:09.858967   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:12:09.859049   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:12:09.859118   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:12:09.859197   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:12:09.859311   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:12:09.859435   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:12:09.859548   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:12:09.859605   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:12:09.859678   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:12:09.859766   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:12:09.859856   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:12:09.859947   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:12:09.860025   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:12:09.860149   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:12:09.860228   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:12:09.860289   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:12:09.860390   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:12:09.862098   66615 out.go:204]   - Booting up control plane ...
	I0429 20:12:09.862211   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:12:09.862298   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:12:09.862360   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:12:09.862484   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:12:09.862720   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:12:09.862794   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:12:09.862882   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863117   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863244   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863470   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863544   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863814   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863895   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864144   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864223   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864393   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864408   66615 kubeadm.go:309] 
	I0429 20:12:09.864473   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:12:09.864526   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:12:09.864543   66615 kubeadm.go:309] 
	I0429 20:12:09.864589   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:12:09.864638   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:12:09.864779   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:12:09.864789   66615 kubeadm.go:309] 
	I0429 20:12:09.864911   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:12:09.864971   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:12:09.865026   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:12:09.865033   66615 kubeadm.go:309] 
	I0429 20:12:09.865150   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:12:09.865228   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:12:09.865241   66615 kubeadm.go:309] 
	I0429 20:12:09.865404   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:12:09.865538   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:12:09.865651   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:12:09.865755   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:12:09.865828   66615 kubeadm.go:309] 
	W0429 20:12:09.865940   66615 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0429 20:12:09.866027   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:12:10.987703   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.121642991s)
	I0429 20:12:10.987802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:12:11.007295   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:12:11.020772   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:12:11.020790   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:12:11.020838   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:12:11.033334   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:12:11.033405   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:12:11.044565   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:12:11.057087   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:12:11.057143   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:12:11.069908   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.082866   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:12:11.082920   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.096659   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:12:11.110106   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:12:11.110166   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:12:11.124952   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:12:11.396252   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:14:07.831448   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:14:07.831556   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:14:07.833111   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:14:07.833179   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:14:07.833288   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:14:07.833421   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:14:07.833530   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:14:07.833616   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:14:07.835518   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:14:07.835623   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:14:07.835703   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:14:07.835776   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:14:07.835839   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:14:07.835893   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:14:07.835957   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:14:07.836039   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:14:07.836129   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:14:07.836238   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:14:07.836350   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:14:07.836394   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:14:07.836441   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:14:07.836488   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:14:07.836559   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:14:07.836637   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:14:07.836683   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:14:07.836778   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:14:07.836854   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:14:07.836895   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:14:07.836950   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:14:07.838553   66615 out.go:204]   - Booting up control plane ...
	I0429 20:14:07.838635   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:14:07.838718   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:14:07.838836   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:14:07.838918   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:14:07.839069   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:14:07.839126   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:14:07.839180   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839369   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839450   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839654   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839779   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840008   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840076   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840322   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840380   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840571   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840594   66615 kubeadm.go:309] 
	I0429 20:14:07.840637   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:14:07.840673   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:14:07.840682   66615 kubeadm.go:309] 
	I0429 20:14:07.840715   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:14:07.840745   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:14:07.840844   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:14:07.840857   66615 kubeadm.go:309] 
	I0429 20:14:07.840969   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:14:07.841022   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:14:07.841073   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:14:07.841083   66615 kubeadm.go:309] 
	I0429 20:14:07.841184   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:14:07.841315   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:14:07.841325   66615 kubeadm.go:309] 
	I0429 20:14:07.841454   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:14:07.841550   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:14:07.841632   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:14:07.841697   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:14:07.841760   66615 kubeadm.go:393] duration metric: took 8m1.501853767s to StartCluster
	I0429 20:14:07.841781   66615 kubeadm.go:309] 
	I0429 20:14:07.841800   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:14:07.841853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:14:07.898194   66615 cri.go:89] found id: ""
	I0429 20:14:07.898227   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.898237   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:14:07.898244   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:14:07.898316   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:14:07.938873   66615 cri.go:89] found id: ""
	I0429 20:14:07.938903   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.938914   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:14:07.938921   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:14:07.938979   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:14:07.980523   66615 cri.go:89] found id: ""
	I0429 20:14:07.980551   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.980559   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:14:07.980565   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:14:07.980612   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:14:08.021334   66615 cri.go:89] found id: ""
	I0429 20:14:08.021366   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.021377   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:14:08.021389   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:14:08.021446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:14:08.060598   66615 cri.go:89] found id: ""
	I0429 20:14:08.060636   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.060648   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:14:08.060655   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:14:08.060716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:14:08.101689   66615 cri.go:89] found id: ""
	I0429 20:14:08.101715   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.101723   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:14:08.101729   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:14:08.101786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:14:08.143295   66615 cri.go:89] found id: ""
	I0429 20:14:08.143333   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.143344   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:14:08.143351   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:14:08.143408   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:14:08.190555   66615 cri.go:89] found id: ""
	I0429 20:14:08.190585   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.190597   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:14:08.190609   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:14:08.190624   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:14:08.251830   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:14:08.251870   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:14:08.306512   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:14:08.306554   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:14:08.323258   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:14:08.323283   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:14:08.405539   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:14:08.405568   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:14:08.405583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0429 20:14:08.514288   66615 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0429 20:14:08.514344   66615 out.go:239] * 
	W0429 20:14:08.514431   66615 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.514465   66615 out.go:239] * 
	W0429 20:14:08.515399   66615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:14:08.518578   66615 out.go:177] 
	W0429 20:14:08.519725   66615 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.519782   66615 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0429 20:14:08.519816   66615 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0429 20:14:08.521068   66615 out.go:177] 
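	
	The wait-control-plane failure above indicates the kubelet never answered on localhost:10248. A minimal sketch of the troubleshooting steps the kubeadm and minikube output themselves suggest (run on the affected node, e.g. via 'minikube ssh'; <profile> and CONTAINERID are placeholders, not values from this run) might look like:
	
		# check whether the kubelet service is running and inspect its journal
		systemctl status kubelet
		journalctl -xeu kubelet
		# list control-plane containers through the cri-o socket, then pull logs from a failing one
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
		# if the journal points at a cgroup-driver mismatch, retry with the flag suggested above
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	
	These commands only mirror the hints printed in the log; they are not additional diagnostics from this test run.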
	
	
	==> CRI-O <==
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.350798882Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4bc25e7d837b61d7d50a1dd053ffb81a7f6d7f77c27275ac7d1dad349bcac838,Verbose:false,}" file="otel-collector/interceptors.go:62" id=4c4f64db-922e-4a72-886c-a99b25bc7971 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.350916584Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4bc25e7d837b61d7d50a1dd053ffb81a7f6d7f77c27275ac7d1dad349bcac838,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1714421500394126350,StartedAt:1714421500540716415,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7fa20f1275f39c0dbd2f28238557da,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/9c7fa20f1275f39c0dbd2f28238557da/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/9c7fa20f1275f39c0dbd2f28238557da/containers/kube-apiserver/d3746b22,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Conta
inerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-embed-certs-161370_9c7fa20f1275f39c0dbd2f28238557da/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=4c4f64db-922e-4a72-886c-a99b25bc7971 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.392261635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=435a1f8b-a82b-4a15-92c1-1dc46acfcd46 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.392363369Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=435a1f8b-a82b-4a15-92c1-1dc46acfcd46 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.394030177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=530160e9-46b1-41c7-9996-09258af30e57 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.394439277Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422066394414926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=530160e9-46b1-41c7-9996-09258af30e57 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.395182501Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a916f2f0-2cf2-4ca4-9f97-6500894878c2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.395262607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a916f2f0-2cf2-4ca4-9f97-6500894878c2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.395438980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09da5e7f79c2ee756da6d7c8cf7a9ec0b14bdc89660de0be5a1789c9837fd07,PodSandboxId:13592af169e448e5456d1d29dc85bd4eedcec210384f808c4c2539706bd88a20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521967460227,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr6bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d14ff20-6dab-4c02-b91c-0a1e326f1593,},Annotations:map[string]string{io.kubernetes.container.hash: 91deb564,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597fd296f206b84c8ad021a50f3526c8b69470bcd90ac39ae7a40306854ac9ab,PodSandboxId:6c2a642e889be4553156c6036285037a1636412f1eae02d2922255a6918550aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521752653889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7z6zv,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 422451a2-615d-4bf8-8de8-d5fa5805219f,},Annotations:map[string]string{io.kubernetes.container.hash: 87b952de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33707b709281cf6d469a14ea10a8cb2fb05aef0c451ee7f796955d8b2427f31c,PodSandboxId:bdfdecde861bfd2cf502c71fcd70c011782565210ed637fce8516949fd5dc98c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNI
NG,CreatedAt:1714421521336688944,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wq48j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b3b23ef-b5b4-4754-bc44-73e1d51a18d7,},Annotations:map[string]string{io.kubernetes.container.hash: ffdf8adb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4c99c955ac14fd43f2860e60f90fbf6dc91c1a2bbbc6b25a4d5172dd64b414c,PodSandboxId:6161d1c61f8548c2bb80e7a990b2f11c843286c32dcf6abeebe77d1a04416ec5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17144215212
90879391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93e046a1-3867-44e1-8a4f-cf0eba6dfd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a656cc1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:033c21bf724950eb59ec37c01840cbebc97390462ad40103725deafe34097f6b,PodSandboxId:d2fe13c2e877279ab6de3e9b96103e8eea857ea9db5192cf6171e22de3109a13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421500465616080,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36865aa59e33dd34dad6ead2415cbd18,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c91d1f0aa2317ec388dc984455f7fb8ba9122c34b93beeab627bb543f4130e5,PodSandboxId:5aa89d2eb3f7230b08418ea015fb01e19fa14a7215fc209c1091595934e5df5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421500432041375,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7ea45965b21a7a2a5f5deef15a1c2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 62a4f4c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0c3731b411f006dfdb676571885a831207d11b62ed4444e5a6c3e610ec16f1,PodSandboxId:08d9c94bbc65edcd3a4b048af68505b557a2a0af7d162ccffc74067949576229,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421500381505262,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7ec996aacb64787a59cb6e9e29694d7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc25e7d837b61d7d50a1dd053ffb81a7f6d7f77c27275ac7d1dad349bcac838,PodSandboxId:9b4013dcd5ac92b83f45f2965cf266016c5274d6239a53d06bd2ca7a432fb501,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421500327618152,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7fa20f1275f39c0dbd2f28238557da,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a916f2f0-2cf2-4ca4-9f97-6500894878c2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.420207701Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=4a67f985-e96b-4e1a-8b26-5888ae412e4a name=/runtime.v1.RuntimeService/Status
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.420316012Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=4a67f985-e96b-4e1a-8b26-5888ae412e4a name=/runtime.v1.RuntimeService/Status
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.440289171Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76c5e926-b0f6-4760-ae25-4eae06cccb5c name=/runtime.v1.RuntimeService/Version
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.440401145Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76c5e926-b0f6-4760-ae25-4eae06cccb5c name=/runtime.v1.RuntimeService/Version
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.442421222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac8bfc75-19fd-4930-9c32-f5cc750dc5aa name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.443005298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422066442965235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac8bfc75-19fd-4930-9c32-f5cc750dc5aa name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.443701467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a893694-70ca-4609-a44b-6a29809ef25f name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.444018633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a893694-70ca-4609-a44b-6a29809ef25f name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.444215361Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09da5e7f79c2ee756da6d7c8cf7a9ec0b14bdc89660de0be5a1789c9837fd07,PodSandboxId:13592af169e448e5456d1d29dc85bd4eedcec210384f808c4c2539706bd88a20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521967460227,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr6bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d14ff20-6dab-4c02-b91c-0a1e326f1593,},Annotations:map[string]string{io.kubernetes.container.hash: 91deb564,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597fd296f206b84c8ad021a50f3526c8b69470bcd90ac39ae7a40306854ac9ab,PodSandboxId:6c2a642e889be4553156c6036285037a1636412f1eae02d2922255a6918550aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521752653889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7z6zv,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 422451a2-615d-4bf8-8de8-d5fa5805219f,},Annotations:map[string]string{io.kubernetes.container.hash: 87b952de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33707b709281cf6d469a14ea10a8cb2fb05aef0c451ee7f796955d8b2427f31c,PodSandboxId:bdfdecde861bfd2cf502c71fcd70c011782565210ed637fce8516949fd5dc98c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNI
NG,CreatedAt:1714421521336688944,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wq48j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b3b23ef-b5b4-4754-bc44-73e1d51a18d7,},Annotations:map[string]string{io.kubernetes.container.hash: ffdf8adb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4c99c955ac14fd43f2860e60f90fbf6dc91c1a2bbbc6b25a4d5172dd64b414c,PodSandboxId:6161d1c61f8548c2bb80e7a990b2f11c843286c32dcf6abeebe77d1a04416ec5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17144215212
90879391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93e046a1-3867-44e1-8a4f-cf0eba6dfd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a656cc1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:033c21bf724950eb59ec37c01840cbebc97390462ad40103725deafe34097f6b,PodSandboxId:d2fe13c2e877279ab6de3e9b96103e8eea857ea9db5192cf6171e22de3109a13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421500465616080,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36865aa59e33dd34dad6ead2415cbd18,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c91d1f0aa2317ec388dc984455f7fb8ba9122c34b93beeab627bb543f4130e5,PodSandboxId:5aa89d2eb3f7230b08418ea015fb01e19fa14a7215fc209c1091595934e5df5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421500432041375,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7ea45965b21a7a2a5f5deef15a1c2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 62a4f4c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0c3731b411f006dfdb676571885a831207d11b62ed4444e5a6c3e610ec16f1,PodSandboxId:08d9c94bbc65edcd3a4b048af68505b557a2a0af7d162ccffc74067949576229,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421500381505262,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7ec996aacb64787a59cb6e9e29694d7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc25e7d837b61d7d50a1dd053ffb81a7f6d7f77c27275ac7d1dad349bcac838,PodSandboxId:9b4013dcd5ac92b83f45f2965cf266016c5274d6239a53d06bd2ca7a432fb501,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421500327618152,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7fa20f1275f39c0dbd2f28238557da,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a893694-70ca-4609-a44b-6a29809ef25f name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.487433264Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8deee086-6268-4904-b0ad-a4e7253b7087 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.487524928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8deee086-6268-4904-b0ad-a4e7253b7087 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.489618642Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3edcc8e-2002-427a-8a2e-1fe41a28a99d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.490295178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422066490264428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3edcc8e-2002-427a-8a2e-1fe41a28a99d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.491021461Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b1c90c4-ee22-4231-8866-2102224fd1b5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.491107071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b1c90c4-ee22-4231-8866-2102224fd1b5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:21:06 embed-certs-161370 crio[726]: time="2024-04-29 20:21:06.491299038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09da5e7f79c2ee756da6d7c8cf7a9ec0b14bdc89660de0be5a1789c9837fd07,PodSandboxId:13592af169e448e5456d1d29dc85bd4eedcec210384f808c4c2539706bd88a20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521967460227,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr6bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d14ff20-6dab-4c02-b91c-0a1e326f1593,},Annotations:map[string]string{io.kubernetes.container.hash: 91deb564,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597fd296f206b84c8ad021a50f3526c8b69470bcd90ac39ae7a40306854ac9ab,PodSandboxId:6c2a642e889be4553156c6036285037a1636412f1eae02d2922255a6918550aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521752653889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7z6zv,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 422451a2-615d-4bf8-8de8-d5fa5805219f,},Annotations:map[string]string{io.kubernetes.container.hash: 87b952de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33707b709281cf6d469a14ea10a8cb2fb05aef0c451ee7f796955d8b2427f31c,PodSandboxId:bdfdecde861bfd2cf502c71fcd70c011782565210ed637fce8516949fd5dc98c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNI
NG,CreatedAt:1714421521336688944,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wq48j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b3b23ef-b5b4-4754-bc44-73e1d51a18d7,},Annotations:map[string]string{io.kubernetes.container.hash: ffdf8adb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4c99c955ac14fd43f2860e60f90fbf6dc91c1a2bbbc6b25a4d5172dd64b414c,PodSandboxId:6161d1c61f8548c2bb80e7a990b2f11c843286c32dcf6abeebe77d1a04416ec5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17144215212
90879391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93e046a1-3867-44e1-8a4f-cf0eba6dfd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a656cc1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:033c21bf724950eb59ec37c01840cbebc97390462ad40103725deafe34097f6b,PodSandboxId:d2fe13c2e877279ab6de3e9b96103e8eea857ea9db5192cf6171e22de3109a13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421500465616080,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36865aa59e33dd34dad6ead2415cbd18,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c91d1f0aa2317ec388dc984455f7fb8ba9122c34b93beeab627bb543f4130e5,PodSandboxId:5aa89d2eb3f7230b08418ea015fb01e19fa14a7215fc209c1091595934e5df5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421500432041375,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7ea45965b21a7a2a5f5deef15a1c2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 62a4f4c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0c3731b411f006dfdb676571885a831207d11b62ed4444e5a6c3e610ec16f1,PodSandboxId:08d9c94bbc65edcd3a4b048af68505b557a2a0af7d162ccffc74067949576229,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421500381505262,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7ec996aacb64787a59cb6e9e29694d7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc25e7d837b61d7d50a1dd053ffb81a7f6d7f77c27275ac7d1dad349bcac838,PodSandboxId:9b4013dcd5ac92b83f45f2965cf266016c5274d6239a53d06bd2ca7a432fb501,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421500327618152,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7fa20f1275f39c0dbd2f28238557da,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b1c90c4-ee22-4231-8866-2102224fd1b5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f09da5e7f79c2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   13592af169e44       coredns-7db6d8ff4d-rr6bd
	597fd296f206b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   6c2a642e889be       coredns-7db6d8ff4d-7z6zv
	33707b709281c       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   bdfdecde861bf       kube-proxy-wq48j
	d4c99c955ac14       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   6161d1c61f854       storage-provisioner
	033c21bf72495       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   d2fe13c2e8772       kube-scheduler-embed-certs-161370
	2c91d1f0aa231       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   5aa89d2eb3f72       etcd-embed-certs-161370
	9a0c3731b411f       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   2                   08d9c94bbc65e       kube-controller-manager-embed-certs-161370
	4bc25e7d837b6       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            2                   9b4013dcd5ac9       kube-apiserver-embed-certs-161370
	
	
	==> coredns [597fd296f206b84c8ad021a50f3526c8b69470bcd90ac39ae7a40306854ac9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f09da5e7f79c2ee756da6d7c8cf7a9ec0b14bdc89660de0be5a1789c9837fd07] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-161370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-161370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=embed-certs-161370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T20_11_46_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:11:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-161370
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:20:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:17:13 +0000   Mon, 29 Apr 2024 20:11:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:17:13 +0000   Mon, 29 Apr 2024 20:11:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:17:13 +0000   Mon, 29 Apr 2024 20:11:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:17:13 +0000   Mon, 29 Apr 2024 20:11:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.184
	  Hostname:    embed-certs-161370
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b833b658df1947c7910ffce5e3af6ef9
	  System UUID:                b833b658-df19-47c7-910f-fce5e3af6ef9
	  Boot ID:                    e66f7a71-4f64-4c86-bf6e-31a74b9aadc6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-7z6zv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-rr6bd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-161370                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-161370             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-161370    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-proxy-wq48j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-161370             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-x2wb6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  Starting                 9m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node embed-certs-161370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node embed-certs-161370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node embed-certs-161370 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node embed-certs-161370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node embed-certs-161370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node embed-certs-161370 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node embed-certs-161370 event: Registered Node embed-certs-161370 in Controller
	
	
	==> dmesg <==
	[  +0.043855] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.981581] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.673771] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.750022] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.105719] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.069611] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064816] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.197537] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.167047] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.345200] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +5.227718] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +0.069840] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.973645] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +5.609639] kauditd_printk_skb: 97 callbacks suppressed
	[Apr29 20:07] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.013986] kauditd_printk_skb: 22 callbacks suppressed
	[Apr29 20:11] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.935620] systemd-fstab-generator[3583]: Ignoring "noauto" option for root device
	[  +4.436884] kauditd_printk_skb: 57 callbacks suppressed
	[  +2.122430] systemd-fstab-generator[3905]: Ignoring "noauto" option for root device
	[ +13.977684] systemd-fstab-generator[4108]: Ignoring "noauto" option for root device
	[  +0.084109] kauditd_printk_skb: 14 callbacks suppressed
	[Apr29 20:13] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [2c91d1f0aa2317ec388dc984455f7fb8ba9122c34b93beeab627bb543f4130e5] <==
	{"level":"info","ts":"2024-04-29T20:11:40.840879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f switched to configuration voters=(13775646200422885695)"}
	{"level":"info","ts":"2024-04-29T20:11:40.840997Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dfaeaf2ad25a061e","local-member-id":"bf2ced3b97aa693f","added-peer-id":"bf2ced3b97aa693f","added-peer-peer-urls":["https://192.168.50.184:2380"]}
	{"level":"info","ts":"2024-04-29T20:11:40.869599Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T20:11:40.870337Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"bf2ced3b97aa693f","initial-advertise-peer-urls":["https://192.168.50.184:2380"],"listen-peer-urls":["https://192.168.50.184:2380"],"advertise-client-urls":["https://192.168.50.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T20:11:40.870483Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T20:11:40.870829Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.184:2380"}
	{"level":"info","ts":"2024-04-29T20:11:40.872836Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.184:2380"}
	{"level":"info","ts":"2024-04-29T20:11:41.78343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T20:11:41.783507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T20:11:41.783529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f received MsgPreVoteResp from bf2ced3b97aa693f at term 1"}
	{"level":"info","ts":"2024-04-29T20:11:41.783541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T20:11:41.783547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f received MsgVoteResp from bf2ced3b97aa693f at term 2"}
	{"level":"info","ts":"2024-04-29T20:11:41.783554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f became leader at term 2"}
	{"level":"info","ts":"2024-04-29T20:11:41.783571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bf2ced3b97aa693f elected leader bf2ced3b97aa693f at term 2"}
	{"level":"info","ts":"2024-04-29T20:11:41.785588Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:11:41.787044Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"bf2ced3b97aa693f","local-member-attributes":"{Name:embed-certs-161370 ClientURLs:[https://192.168.50.184:2379]}","request-path":"/0/members/bf2ced3b97aa693f/attributes","cluster-id":"dfaeaf2ad25a061e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T20:11:41.787242Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:11:41.787264Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:11:41.787407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dfaeaf2ad25a061e","local-member-id":"bf2ced3b97aa693f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:11:41.788528Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:11:41.78859Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:11:41.790445Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T20:11:41.787628Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T20:11:41.790599Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T20:11:41.792128Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.184:2379"}
	
	
	==> kernel <==
	 20:21:06 up 14 min,  0 users,  load average: 0.55, 0.37, 0.31
	Linux embed-certs-161370 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4bc25e7d837b61d7d50a1dd053ffb81a7f6d7f77c27275ac7d1dad349bcac838] <==
	I0429 20:15:01.810089       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:16:43.277014       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:16:43.277205       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0429 20:16:44.278281       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:16:44.278420       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:16:44.278432       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:16:44.278514       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:16:44.278592       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:16:44.279874       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:17:44.278930       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:17:44.279063       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:17:44.279093       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:17:44.280119       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:17:44.280210       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:17:44.280220       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:19:44.279625       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:19:44.279982       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:19:44.280018       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:19:44.281039       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:19:44.281170       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:19:44.281220       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9a0c3731b411f006dfdb676571885a831207d11b62ed4444e5a6c3e610ec16f1] <==
	I0429 20:15:30.234977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="101.504µs"
	E0429 20:15:58.785326       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:15:59.225306       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:16:28.793688       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:16:29.234989       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:16:58.801391       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:16:59.243486       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:17:28.807232       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:17:29.252213       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:17:58.813098       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:17:59.261550       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0429 20:18:09.239493       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="340.693µs"
	I0429 20:18:24.240997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="1.499224ms"
	E0429 20:18:28.820620       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:18:29.271243       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:18:58.831255       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:18:59.281557       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:19:28.836504       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:19:29.292024       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:19:58.845624       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:19:59.301528       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:20:28.852635       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:20:29.311445       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:20:58.860943       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:20:59.322615       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [33707b709281cf6d469a14ea10a8cb2fb05aef0c451ee7f796955d8b2427f31c] <==
	I0429 20:12:01.958964       1 server_linux.go:69] "Using iptables proxy"
	I0429 20:12:01.989284       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.184"]
	I0429 20:12:02.150512       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 20:12:02.150587       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 20:12:02.150613       1 server_linux.go:165] "Using iptables Proxier"
	I0429 20:12:02.159576       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 20:12:02.159886       1 server.go:872] "Version info" version="v1.30.0"
	I0429 20:12:02.159942       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:12:02.161088       1 config.go:192] "Starting service config controller"
	I0429 20:12:02.161132       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 20:12:02.161161       1 config.go:101] "Starting endpoint slice config controller"
	I0429 20:12:02.161165       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 20:12:02.164562       1 config.go:319] "Starting node config controller"
	I0429 20:12:02.164603       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 20:12:02.261288       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 20:12:02.261358       1 shared_informer.go:320] Caches are synced for service config
	I0429 20:12:02.265037       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [033c21bf724950eb59ec37c01840cbebc97390462ad40103725deafe34097f6b] <==
	W0429 20:11:44.112830       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 20:11:44.112991       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 20:11:44.152241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 20:11:44.153776       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 20:11:44.162873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 20:11:44.163289       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 20:11:44.280078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 20:11:44.280175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 20:11:44.422447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 20:11:44.423557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 20:11:44.473712       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 20:11:44.473949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 20:11:44.474109       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 20:11:44.474360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 20:11:44.499225       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 20:11:44.499409       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 20:11:44.529610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 20:11:44.530194       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 20:11:44.657814       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 20:11:44.657963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 20:11:44.687682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 20:11:44.687884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 20:11:44.704566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 20:11:44.705145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0429 20:11:46.768678       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 20:18:46 embed-certs-161370 kubelet[3912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:18:46 embed-certs-161370 kubelet[3912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:18:46 embed-certs-161370 kubelet[3912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:18:46 embed-certs-161370 kubelet[3912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:18:47 embed-certs-161370 kubelet[3912]: E0429 20:18:47.215944    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:18:59 embed-certs-161370 kubelet[3912]: E0429 20:18:59.216578    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:19:14 embed-certs-161370 kubelet[3912]: E0429 20:19:14.220015    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:19:29 embed-certs-161370 kubelet[3912]: E0429 20:19:29.216329    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:19:40 embed-certs-161370 kubelet[3912]: E0429 20:19:40.217135    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:19:46 embed-certs-161370 kubelet[3912]: E0429 20:19:46.233547    3912 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:19:46 embed-certs-161370 kubelet[3912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:19:46 embed-certs-161370 kubelet[3912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:19:46 embed-certs-161370 kubelet[3912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:19:46 embed-certs-161370 kubelet[3912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:19:52 embed-certs-161370 kubelet[3912]: E0429 20:19:52.216813    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:20:04 embed-certs-161370 kubelet[3912]: E0429 20:20:04.221575    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:20:18 embed-certs-161370 kubelet[3912]: E0429 20:20:18.216922    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:20:32 embed-certs-161370 kubelet[3912]: E0429 20:20:32.218110    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:20:45 embed-certs-161370 kubelet[3912]: E0429 20:20:45.216902    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:20:46 embed-certs-161370 kubelet[3912]: E0429 20:20:46.236337    3912 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:20:46 embed-certs-161370 kubelet[3912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:20:46 embed-certs-161370 kubelet[3912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:20:46 embed-certs-161370 kubelet[3912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:20:46 embed-certs-161370 kubelet[3912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:20:58 embed-certs-161370 kubelet[3912]: E0429 20:20:58.218137    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	
	
	==> storage-provisioner [d4c99c955ac14fd43f2860e60f90fbf6dc91c1a2bbbc6b25a4d5172dd64b414c] <==
	I0429 20:12:01.553195       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 20:12:01.611976       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 20:12:01.612052       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 20:12:01.659691       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 20:12:01.663401       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-161370_9284d8e6-3cb5-4ea7-941d-d82a438201d0!
	I0429 20:12:01.686296       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2aacdc58-addd-4906-9ef8-55619688bc13", APIVersion:"v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-161370_9284d8e6-3cb5-4ea7-941d-d82a438201d0 became leader
	I0429 20:12:01.764535       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-161370_9284d8e6-3cb5-4ea7-941d-d82a438201d0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-161370 -n embed-certs-161370
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-161370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-x2wb6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-161370 describe pod metrics-server-569cc877fc-x2wb6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-161370 describe pod metrics-server-569cc877fc-x2wb6: exit status 1 (61.513762ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-x2wb6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-161370 describe pod metrics-server-569cc877fc-x2wb6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.49s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
[the preceding warning line repeated a further 118 times]
E0429 20:17:03.952558   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
[the preceding warning line repeated a further 44 times]
E0429 20:17:48.914648   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
[last warning repeated 72 more times: pod list for "kubernetes-dashboard" still failing with connection refused against https://192.168.72.240:8443]
E0429 20:19:00.893843   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
E0429 20:22:48.915077   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-919612 -n old-k8s-version-919612
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-919612 -n old-k8s-version-919612: exit status 2 (250.713984ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-919612" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
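(Note: a standalone approximation of this wait, hedged as illustrative rather than the test helper's exact implementation, would be:

    kubectl --context old-k8s-version-919612 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s

With the apiserver stopped, this would likewise fail to complete within the timeout.)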
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612: exit status 2 (244.127173ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-919612 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-919612 logs -n 25: (1.690793841s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:55 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| ssh     | cert-options-437743 ssh                                | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-437743 -- sudo                         | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-437743                                 | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-161370            | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-456788             | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-193781 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | disable-driver-mounts-193781                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 20:00 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-866143  | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-161370                 | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-919612        | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-456788                  | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:01 UTC | 29 Apr 24 20:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-919612             | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-866143       | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:10 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:02:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:02:45.502823   66875 out.go:291] Setting OutFile to fd 1 ...
	I0429 20:02:45.503073   66875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:45.503084   66875 out.go:304] Setting ErrFile to fd 2...
	I0429 20:02:45.503089   66875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:45.503272   66875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 20:02:45.503808   66875 out.go:298] Setting JSON to false
	I0429 20:02:45.504681   66875 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6263,"bootTime":1714414702,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 20:02:45.504736   66875 start.go:139] virtualization: kvm guest
	I0429 20:02:45.507344   66875 out.go:177] * [default-k8s-diff-port-866143] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 20:02:45.508715   66875 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:02:45.508745   66875 notify.go:220] Checking for updates...
	I0429 20:02:45.510093   66875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:02:45.512200   66875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:02:45.513622   66875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:02:45.514915   66875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 20:02:45.516228   66875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:02:45.517923   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:02:45.518366   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:45.518446   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:45.533484   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46187
	I0429 20:02:45.533901   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:45.534427   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:02:45.534448   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:45.534822   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:45.535013   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:02:45.535292   66875 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:02:45.535595   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:45.535639   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:45.551065   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0429 20:02:45.551469   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:45.551906   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:02:45.551928   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:45.552239   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:45.552451   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:02:45.584714   66875 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 20:02:45.586089   66875 start.go:297] selected driver: kvm2
	I0429 20:02:45.586117   66875 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:45.586250   66875 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:02:45.587043   66875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:45.587136   66875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 20:02:45.601799   66875 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 20:02:45.602171   66875 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:02:45.602246   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:02:45.602265   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:02:45.602323   66875 start.go:340] cluster config:
	{Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:45.602444   66875 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:45.605081   66875 out.go:177] * Starting "default-k8s-diff-port-866143" primary control-plane node in "default-k8s-diff-port-866143" cluster
	I0429 20:02:42.794291   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:45.866333   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:45.606536   66875 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:02:45.606590   66875 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 20:02:45.606602   66875 cache.go:56] Caching tarball of preloaded images
	I0429 20:02:45.606687   66875 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 20:02:45.606704   66875 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 20:02:45.606799   66875 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/config.json ...
	I0429 20:02:45.606986   66875 start.go:360] acquireMachinesLock for default-k8s-diff-port-866143: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:02:51.946332   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:55.018269   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:01.098329   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:04.170389   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:10.250316   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:13.322292   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:19.402290   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:22.474356   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:28.554348   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:31.626416   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:37.706282   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:40.778321   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:46.858318   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:49.930321   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:56.010331   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:59.082336   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:05.162299   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:08.234328   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:14.314352   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:17.386337   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:23.466350   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:26.538284   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:32.618297   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:35.690319   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:41.770372   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:44.842280   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:50.922320   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:53.994336   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:00.074389   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:03.146353   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:09.226369   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:12.298407   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:15.302828   66218 start.go:364] duration metric: took 4m7.483402316s to acquireMachinesLock for "no-preload-456788"
	I0429 20:05:15.302889   66218 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:15.302896   66218 fix.go:54] fixHost starting: 
	I0429 20:05:15.303301   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:15.303337   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:15.319582   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I0429 20:05:15.320057   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:15.320597   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:05:15.320620   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:15.321017   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:15.321272   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:15.321472   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:05:15.323137   66218 fix.go:112] recreateIfNeeded on no-preload-456788: state=Stopped err=<nil>
	I0429 20:05:15.323171   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	W0429 20:05:15.323346   66218 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:15.325520   66218 out.go:177] * Restarting existing kvm2 VM for "no-preload-456788" ...
	I0429 20:05:15.327122   66218 main.go:141] libmachine: (no-preload-456788) Calling .Start
	I0429 20:05:15.327314   66218 main.go:141] libmachine: (no-preload-456788) Ensuring networks are active...
	I0429 20:05:15.328136   66218 main.go:141] libmachine: (no-preload-456788) Ensuring network default is active
	I0429 20:05:15.328437   66218 main.go:141] libmachine: (no-preload-456788) Ensuring network mk-no-preload-456788 is active
	I0429 20:05:15.328771   66218 main.go:141] libmachine: (no-preload-456788) Getting domain xml...
	I0429 20:05:15.329442   66218 main.go:141] libmachine: (no-preload-456788) Creating domain...
	I0429 20:05:16.534970   66218 main.go:141] libmachine: (no-preload-456788) Waiting to get IP...
	I0429 20:05:16.536019   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:16.536375   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:16.536444   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:16.536369   67416 retry.go:31] will retry after 240.743093ms: waiting for machine to come up
	I0429 20:05:16.779123   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:16.779623   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:16.779659   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:16.779558   67416 retry.go:31] will retry after 355.595109ms: waiting for machine to come up
	I0429 20:05:17.137145   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:17.137512   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:17.137542   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:17.137480   67416 retry.go:31] will retry after 347.905643ms: waiting for machine to come up
	I0429 20:05:17.487174   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:17.487566   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:17.487597   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:17.487543   67416 retry.go:31] will retry after 547.016094ms: waiting for machine to come up
	I0429 20:05:15.300221   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:15.300278   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:05:15.300613   65980 buildroot.go:166] provisioning hostname "embed-certs-161370"
	I0429 20:05:15.300652   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:05:15.300910   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:05:15.302677   65980 machine.go:97] duration metric: took 4m37.41104152s to provisionDockerMachine
	I0429 20:05:15.302722   65980 fix.go:56] duration metric: took 4m37.432092484s for fixHost
	I0429 20:05:15.302728   65980 start.go:83] releasing machines lock for "embed-certs-161370", held for 4m37.432113341s
	W0429 20:05:15.302753   65980 start.go:713] error starting host: provision: host is not running
	W0429 20:05:15.302871   65980 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0429 20:05:15.302882   65980 start.go:728] Will try again in 5 seconds ...
	I0429 20:05:18.036617   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:18.037042   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:18.037104   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:18.037025   67416 retry.go:31] will retry after 465.100134ms: waiting for machine to come up
	I0429 20:05:18.503846   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:18.504326   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:18.504352   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:18.504283   67416 retry.go:31] will retry after 672.007195ms: waiting for machine to come up
	I0429 20:05:19.178173   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:19.178570   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:19.178604   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:19.178516   67416 retry.go:31] will retry after 744.052058ms: waiting for machine to come up
	I0429 20:05:19.924561   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:19.925029   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:19.925060   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:19.925002   67416 retry.go:31] will retry after 1.06511003s: waiting for machine to come up
	I0429 20:05:20.991584   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:20.992015   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:20.992046   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:20.991980   67416 retry.go:31] will retry after 1.677065765s: waiting for machine to come up
	I0429 20:05:22.671760   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:22.672123   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:22.672149   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:22.672085   67416 retry.go:31] will retry after 1.979191189s: waiting for machine to come up
	I0429 20:05:20.303964   65980 start.go:360] acquireMachinesLock for embed-certs-161370: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:05:24.654246   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:24.654711   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:24.654735   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:24.654663   67416 retry.go:31] will retry after 1.839551716s: waiting for machine to come up
	I0429 20:05:26.496511   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:26.496982   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:26.497017   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:26.496939   67416 retry.go:31] will retry after 3.505979368s: waiting for machine to come up
	I0429 20:05:30.006590   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:30.006916   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:30.006951   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:30.006871   67416 retry.go:31] will retry after 3.811785899s: waiting for machine to come up
	I0429 20:05:35.155600   66615 start.go:364] duration metric: took 3m25.093405289s to acquireMachinesLock for "old-k8s-version-919612"
	I0429 20:05:35.155655   66615 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:35.155661   66615 fix.go:54] fixHost starting: 
	I0429 20:05:35.155999   66615 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:35.156034   66615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:35.173332   66615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0429 20:05:35.173754   66615 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:35.174261   66615 main.go:141] libmachine: Using API Version  1
	I0429 20:05:35.174294   66615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:35.174602   66615 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:35.174797   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:35.174987   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetState
	I0429 20:05:35.176453   66615 fix.go:112] recreateIfNeeded on old-k8s-version-919612: state=Stopped err=<nil>
	I0429 20:05:35.176478   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	W0429 20:05:35.176647   66615 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:35.178966   66615 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-919612" ...
	I0429 20:05:33.823293   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.823787   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has current primary IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.823806   66218 main.go:141] libmachine: (no-preload-456788) Found IP for machine: 192.168.39.235
	I0429 20:05:33.823830   66218 main.go:141] libmachine: (no-preload-456788) Reserving static IP address...
	I0429 20:05:33.824243   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "no-preload-456788", mac: "52:54:00:15:ae:18", ip: "192.168.39.235"} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.824279   66218 main.go:141] libmachine: (no-preload-456788) DBG | skip adding static IP to network mk-no-preload-456788 - found existing host DHCP lease matching {name: "no-preload-456788", mac: "52:54:00:15:ae:18", ip: "192.168.39.235"}
	I0429 20:05:33.824293   66218 main.go:141] libmachine: (no-preload-456788) Reserved static IP address: 192.168.39.235
	I0429 20:05:33.824308   66218 main.go:141] libmachine: (no-preload-456788) Waiting for SSH to be available...
	I0429 20:05:33.824323   66218 main.go:141] libmachine: (no-preload-456788) DBG | Getting to WaitForSSH function...
	I0429 20:05:33.826371   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.826678   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.826711   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.826808   66218 main.go:141] libmachine: (no-preload-456788) DBG | Using SSH client type: external
	I0429 20:05:33.826836   66218 main.go:141] libmachine: (no-preload-456788) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa (-rw-------)
	I0429 20:05:33.826863   66218 main.go:141] libmachine: (no-preload-456788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:05:33.826876   66218 main.go:141] libmachine: (no-preload-456788) DBG | About to run SSH command:
	I0429 20:05:33.826887   66218 main.go:141] libmachine: (no-preload-456788) DBG | exit 0
	I0429 20:05:33.954275   66218 main.go:141] libmachine: (no-preload-456788) DBG | SSH cmd err, output: <nil>: 
	I0429 20:05:33.954631   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetConfigRaw
	I0429 20:05:33.955387   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:33.957827   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.958210   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.958241   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.958510   66218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/config.json ...
	I0429 20:05:33.958707   66218 machine.go:94] provisionDockerMachine start ...
	I0429 20:05:33.958726   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:33.958952   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:33.961236   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.961535   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.961564   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.961692   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:33.961857   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:33.962015   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:33.962163   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:33.962339   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:33.962522   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:33.962533   66218 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:05:34.070746   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:05:34.070777   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.071037   66218 buildroot.go:166] provisioning hostname "no-preload-456788"
	I0429 20:05:34.071062   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.071305   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.073680   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.074016   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.074043   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.074203   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.074374   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.074513   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.074612   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.074743   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.074946   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.074960   66218 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-456788 && echo "no-preload-456788" | sudo tee /etc/hostname
	I0429 20:05:34.198256   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-456788
	
	I0429 20:05:34.198286   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.201126   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.201482   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.201521   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.201710   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.201914   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.202055   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.202219   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.202361   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.202549   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.202573   66218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-456788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-456788/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-456788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:05:34.324678   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:34.324710   66218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:05:34.324732   66218 buildroot.go:174] setting up certificates
	I0429 20:05:34.324744   66218 provision.go:84] configureAuth start
	I0429 20:05:34.324756   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.325032   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:34.327623   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.328010   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.328040   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.328149   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.330359   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.330679   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.330711   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.330811   66218 provision.go:143] copyHostCerts
	I0429 20:05:34.330865   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:05:34.330878   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:05:34.330939   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:05:34.331023   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:05:34.331031   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:05:34.331054   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:05:34.331111   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:05:34.331119   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:05:34.331148   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:05:34.331231   66218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.no-preload-456788 san=[127.0.0.1 192.168.39.235 localhost minikube no-preload-456788]
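	For reference, a minimal Go sketch of what "generating server cert ... san=[...]" amounts to: issuing a CA-signed server certificate whose SANs are the IPs and hostnames listed in the line above. Key sizes, lifetimes, and the self-contained CA are assumptions for illustration; this is not minikube's actual provision code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key pair (in the log this is loaded from ca.pem / ca-key.pem, not generated).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-456788"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.235")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-456788"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}) // server.pem equivalent
}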
	I0429 20:05:34.444358   66218 provision.go:177] copyRemoteCerts
	I0429 20:05:34.444420   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:05:34.444445   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.447129   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.447432   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.447466   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.447623   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.447833   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.447999   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.448129   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:34.533465   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:05:34.561724   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:05:34.589229   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 20:05:34.617451   66218 provision.go:87] duration metric: took 292.691614ms to configureAuth
	I0429 20:05:34.617491   66218 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:05:34.617733   66218 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:05:34.617821   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.620628   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.621016   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.621047   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.621257   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.621532   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.621718   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.621892   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.622085   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.622289   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.622305   66218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:05:34.908031   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:05:34.908064   66218 machine.go:97] duration metric: took 949.343369ms to provisionDockerMachine
	I0429 20:05:34.908077   66218 start.go:293] postStartSetup for "no-preload-456788" (driver="kvm2")
	I0429 20:05:34.908091   66218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:05:34.908107   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:34.908452   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:05:34.908489   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.911574   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.912026   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.912054   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.912219   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.912428   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.912616   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.912743   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:34.997625   66218 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:05:35.002661   66218 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:05:35.002687   66218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:05:35.002753   66218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:05:35.002822   66218 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:05:35.002906   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:05:35.013292   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:35.039830   66218 start.go:296] duration metric: took 131.741312ms for postStartSetup
	I0429 20:05:35.039865   66218 fix.go:56] duration metric: took 19.736969384s for fixHost
	I0429 20:05:35.039905   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.042526   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.042877   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.042912   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.043032   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.043239   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.043416   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.043534   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.043696   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:35.043848   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:35.043858   66218 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 20:05:35.155463   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421135.123583649
	
	I0429 20:05:35.155485   66218 fix.go:216] guest clock: 1714421135.123583649
	I0429 20:05:35.155496   66218 fix.go:229] Guest: 2024-04-29 20:05:35.123583649 +0000 UTC Remote: 2024-04-29 20:05:35.039869068 +0000 UTC m=+267.371683880 (delta=83.714581ms)
	I0429 20:05:35.155514   66218 fix.go:200] guest clock delta is within tolerance: 83.714581ms
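	For context, a small Go sketch (assumed shape, not minikube's fix.go) of the guest-clock check recorded above: parse the guest's `date +%s.%N` output, diff it against the host timestamp, and accept deltas under an assumed tolerance (the log's delta is ~83.7ms).

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate fraction to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1714421135.123583649") // guest clock from the log
	host := time.Unix(0, 1714421135039869068)           // "Remote" timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	fmt.Printf("delta=%v within tolerance=%v\n", delta, delta <= tolerance)
}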
	I0429 20:05:35.155519   66218 start.go:83] releasing machines lock for "no-preload-456788", held for 19.852645936s
	I0429 20:05:35.155544   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.155881   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:35.158682   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.159051   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.159070   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.159205   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.159793   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.159987   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.160077   66218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:05:35.160117   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.160216   66218 ssh_runner.go:195] Run: cat /version.json
	I0429 20:05:35.160244   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.162788   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163016   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163226   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.163250   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163372   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.163449   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.163475   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163537   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.163621   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.163723   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.163791   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.163873   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:35.163920   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.164064   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:35.248518   66218 ssh_runner.go:195] Run: systemctl --version
	I0429 20:05:35.271479   66218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:05:35.423324   66218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:05:35.430371   66218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:05:35.430445   66218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:05:35.447860   66218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:05:35.447886   66218 start.go:494] detecting cgroup driver to use...
	I0429 20:05:35.447949   66218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:05:35.464102   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:05:35.479069   66218 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:05:35.479158   66218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:05:35.493800   66218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:05:35.509284   66218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:05:35.627273   66218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:05:35.785213   66218 docker.go:233] disabling docker service ...
	I0429 20:05:35.785300   66218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:05:35.803584   66218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:05:35.818874   66218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:05:35.984309   66218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:05:36.128841   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:05:36.148237   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:05:36.172144   66218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:05:36.172243   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.191274   66218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:05:36.191353   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.209656   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.224474   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.238802   66218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:05:36.252515   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.264522   66218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.286496   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.299127   66218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:05:36.310702   66218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:05:36.310760   66218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:05:36.336226   66218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:05:36.348617   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:36.474875   66218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:05:36.619181   66218 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:05:36.619257   66218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:05:36.625401   66218 start.go:562] Will wait 60s for crictl version
	I0429 20:05:36.625475   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:36.630232   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:05:36.667005   66218 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:05:36.667093   66218 ssh_runner.go:195] Run: crio --version
	I0429 20:05:36.699758   66218 ssh_runner.go:195] Run: crio --version
	I0429 20:05:36.734406   66218 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:05:36.735853   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:36.738683   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:36.739019   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:36.739049   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:36.739310   66218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 20:05:36.745227   66218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:36.760124   66218 kubeadm.go:877] updating cluster {Name:no-preload-456788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:05:36.760238   66218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:05:36.760278   66218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:05:36.801389   66218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:05:36.801414   66218 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 20:05:36.801470   66218 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:36.801508   66218 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:36.801524   66218 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:36.801559   66218 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:36.801580   66218 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.801632   66218 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0429 20:05:36.801687   66218 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:36.801688   66218 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:36.803301   66218 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:36.803300   66218 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.803308   66218 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:36.803382   66218 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:36.956976   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.964957   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.022376   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.025860   66218 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0429 20:05:37.025893   66218 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0429 20:05:37.025915   66218 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.025924   66218 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:37.025962   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.025964   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.072629   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.072688   66218 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0429 20:05:37.072713   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:37.072741   66218 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.072791   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.118610   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0429 20:05:37.118704   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.118720   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.128364   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0429 20:05:37.128474   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:37.161350   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0429 20:05:37.165670   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0429 20:05:37.165693   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0429 20:05:37.165710   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.165754   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.165762   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0429 20:05:37.165779   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:37.167440   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:37.174173   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:37.180560   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:37.715733   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:35.180393   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .Start
	I0429 20:05:35.180576   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring networks are active...
	I0429 20:05:35.181281   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network default is active
	I0429 20:05:35.181678   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network mk-old-k8s-version-919612 is active
	I0429 20:05:35.182102   66615 main.go:141] libmachine: (old-k8s-version-919612) Getting domain xml...
	I0429 20:05:35.182867   66615 main.go:141] libmachine: (old-k8s-version-919612) Creating domain...
	I0429 20:05:36.459478   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting to get IP...
	I0429 20:05:36.460301   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.460751   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.460817   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.460706   67552 retry.go:31] will retry after 280.48781ms: waiting for machine to come up
	I0429 20:05:36.743188   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.743630   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.743658   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.743591   67552 retry.go:31] will retry after 326.238132ms: waiting for machine to come up
	I0429 20:05:37.071146   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.071576   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.071609   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.071527   67552 retry.go:31] will retry after 380.72234ms: waiting for machine to come up
	I0429 20:05:37.453967   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.454435   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.454464   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.454385   67552 retry.go:31] will retry after 593.303053ms: waiting for machine to come up
	I0429 20:05:38.049072   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.049555   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.049587   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.049500   67552 retry.go:31] will retry after 694.752524ms: waiting for machine to come up
	I0429 20:05:38.746542   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.747034   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.747065   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.747002   67552 retry.go:31] will retry after 860.161186ms: waiting for machine to come up
	I0429 20:05:39.609098   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:39.609601   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:39.609634   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:39.609544   67552 retry.go:31] will retry after 726.889681ms: waiting for machine to come up
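	The "will retry after ..." lines above come from a poll-with-backoff loop while waiting for the VM's DHCP lease. A rough Go sketch of that pattern, with an assumed cap, jitter, and a placeholder IP (not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it reports an address or the timeout expires,
// sleeping a jittered, roughly doubling interval between attempts.
func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if wait < 4*time.Second {
			wait *= 2 // back off, capped at a few seconds as the log durations suggest
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, bool) {
		attempts++
		return "192.168.39.42", attempts > 3 // placeholder: pretend the lease appears on the 4th poll
	}, time.Minute)
	fmt.Println(ip, err)
}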
	I0429 20:05:39.327634   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.161845487s)
	I0429 20:05:39.327673   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.161870572s)
	I0429 20:05:39.327710   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0429 20:05:39.327675   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0429 20:05:39.327737   66218 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:39.327748   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0: (2.16027023s)
	I0429 20:05:39.327805   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:39.327811   66218 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0429 20:05:39.327821   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0: (2.153617598s)
	I0429 20:05:39.327846   66218 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:39.327878   66218 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0429 20:05:39.327891   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0: (2.147303278s)
	I0429 20:05:39.327910   66218 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:39.327929   66218 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0429 20:05:39.327944   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327954   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.612190652s)
	I0429 20:05:39.327960   66218 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:39.327984   66218 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0429 20:05:39.328035   66218 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:39.328061   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327991   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327886   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.333555   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:39.343257   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:41.263038   66218 ssh_runner.go:235] Completed: which crictl: (1.934889703s)
	I0429 20:05:41.263103   66218 ssh_runner.go:235] Completed: which crictl: (1.93491368s)
	I0429 20:05:41.263121   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:41.263132   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.935299869s)
	I0429 20:05:41.263153   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0: (1.929577799s)
	I0429 20:05:41.263155   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0429 20:05:41.263217   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.919934007s)
	I0429 20:05:41.263221   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0429 20:05:41.263248   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:41.263251   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0429 20:05:41.263290   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:41.263301   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:41.263343   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:41.263159   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:40.338292   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:40.338823   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:40.338864   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:40.338757   67552 retry.go:31] will retry after 1.310400969s: waiting for machine to come up
	I0429 20:05:41.651107   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:41.651625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:41.651670   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:41.651575   67552 retry.go:31] will retry after 1.769756679s: waiting for machine to come up
	I0429 20:05:43.423326   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:43.423829   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:43.423869   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:43.423790   67552 retry.go:31] will retry after 1.748237944s: waiting for machine to come up
	I0429 20:05:44.084051   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.820737476s)
	I0429 20:05:44.084139   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.820774517s)
	I0429 20:05:44.084167   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.820842646s)
	I0429 20:05:44.084186   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0429 20:05:44.084142   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0429 20:05:44.084202   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0429 20:05:44.084211   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:44.084065   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0: (2.820919138s)
	I0429 20:05:44.084244   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0429 20:05:44.084260   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:44.084272   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0: (2.82086612s)
	I0429 20:05:44.084305   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0429 20:05:44.084331   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:44.084375   66218 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:44.091151   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0429 20:05:46.553783   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.469493694s)
	I0429 20:05:46.553882   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0429 20:05:46.553912   66218 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:46.553837   66218 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: (2.469479626s)
	I0429 20:05:46.553973   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:46.553975   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0429 20:05:47.510118   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0429 20:05:47.510169   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:47.510212   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:45.173157   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:45.173617   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:45.173642   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:45.173563   67552 retry.go:31] will retry after 2.784243469s: waiting for machine to come up
	I0429 20:05:47.959942   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:47.960473   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:47.960508   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:47.960410   67552 retry.go:31] will retry after 3.046526969s: waiting for machine to come up
	I0429 20:05:49.069163   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.55892426s)
	I0429 20:05:49.069202   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0429 20:05:49.069231   66218 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:49.069276   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:51.007941   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:51.008230   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:51.008253   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:51.008213   67552 retry.go:31] will retry after 4.220985004s: waiting for machine to come up
	I0429 20:05:56.579154   66875 start.go:364] duration metric: took 3m10.972135355s to acquireMachinesLock for "default-k8s-diff-port-866143"
	I0429 20:05:56.579208   66875 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:56.579230   66875 fix.go:54] fixHost starting: 
	I0429 20:05:56.579615   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:56.579655   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:56.599113   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0429 20:05:56.599627   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:56.600173   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:05:56.600198   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:56.600488   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:56.600694   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:05:56.600849   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:05:56.602291   66875 fix.go:112] recreateIfNeeded on default-k8s-diff-port-866143: state=Stopped err=<nil>
	I0429 20:05:56.602315   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	W0429 20:05:56.602456   66875 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:56.605006   66875 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-866143" ...
	I0429 20:05:53.062693   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.993382111s)
	I0429 20:05:53.062730   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0429 20:05:53.062757   66218 cache_images.go:123] Successfully loaded all cached images
	I0429 20:05:53.062762   66218 cache_images.go:92] duration metric: took 16.261337424s to LoadCachedImages
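	The LoadCachedImages sequence above follows a check-then-load pattern: inspect each required image in the runtime, and only copy and load the cached tarball when it is missing. A hedged Go sketch of that flow, with hypothetical helper names (not minikube's cache_images.go):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// imagePresent mirrors the `sudo podman image inspect` probes in the log:
// a zero exit status means the image already exists in the runtime.
func imagePresent(image string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", image).Run() == nil
}

// loadCached mirrors the `sudo podman load -i /var/lib/minikube/images/...` calls.
func loadCached(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	images := map[string]string{ // image -> cached tarball name (illustrative subset)
		"registry.k8s.io/kube-apiserver:v1.30.0": "kube-apiserver_v1.30.0",
		"registry.k8s.io/etcd:3.5.12-0":          "etcd_3.5.12-0",
	}
	for image, name := range images {
		if imagePresent(image) {
			fmt.Println("already loaded:", image)
			continue
		}
		tarball := filepath.Join("/var/lib/minikube/images", name)
		if err := loadCached(tarball); err != nil {
			fmt.Println("load failed:", err)
			continue
		}
		fmt.Println("Transferred and loaded", image, "from cache")
	}
}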
	I0429 20:05:53.062770   66218 kubeadm.go:928] updating node { 192.168.39.235 8443 v1.30.0 crio true true} ...
	I0429 20:05:53.062893   66218 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-456788 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:05:53.062994   66218 ssh_runner.go:195] Run: crio config
	I0429 20:05:53.116289   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:05:53.116311   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:05:53.116322   66218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:05:53.116340   66218 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.235 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-456788 NodeName:no-preload-456788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:05:53.116516   66218 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-456788"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:05:53.116592   66218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:05:53.128095   66218 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:05:53.128174   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:05:53.138786   66218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0429 20:05:53.158151   66218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:05:53.176440   66218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
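The kubeadm configuration shown above is rendered in memory and then copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal Go sketch of that render step, using text/template with a hypothetical parameter struct (illustrative only, not minikube's actual types), assuming the values logged for this profile:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a hypothetical subset of the options logged above.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	// Render the template with the values seen in the log and print the YAML
	// that would then be shipped to the node.
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	p := kubeadmParams{
		AdvertiseAddress: "192.168.39.235",
		BindPort:         8443,
		NodeName:         "no-preload-456788",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.30.0",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}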
	I0429 20:05:53.195348   66218 ssh_runner.go:195] Run: grep 192.168.39.235	control-plane.minikube.internal$ /etc/hosts
	I0429 20:05:53.199408   66218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:53.212407   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:53.349752   66218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:05:53.368381   66218 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788 for IP: 192.168.39.235
	I0429 20:05:53.368401   66218 certs.go:194] generating shared ca certs ...
	I0429 20:05:53.368415   66218 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:05:53.368565   66218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:05:53.368609   66218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:05:53.368619   66218 certs.go:256] generating profile certs ...
	I0429 20:05:53.368697   66218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.key
	I0429 20:05:53.368751   66218 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.key.5f45c78c
	I0429 20:05:53.368785   66218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.key
	I0429 20:05:53.368889   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:05:53.368915   66218 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:05:53.368921   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:05:53.368944   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:05:53.368972   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:05:53.368993   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:05:53.369029   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:53.369624   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:05:53.428403   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:05:53.467050   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:05:53.501319   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:05:53.528828   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:05:53.553742   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:05:53.582308   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:05:53.609324   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:05:53.636730   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:05:53.663388   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:05:53.690949   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:05:53.717113   66218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:05:53.735784   66218 ssh_runner.go:195] Run: openssl version
	I0429 20:05:53.741879   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:05:53.752930   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.757811   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.757861   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.763798   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:05:53.775019   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:05:53.786654   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.791457   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.791500   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.797608   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:05:53.809139   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:05:53.820927   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.826384   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.826441   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.832798   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:05:53.844300   66218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:05:53.849139   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:05:53.855556   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:05:53.861716   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:05:53.868390   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:05:53.874740   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:05:53.881101   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
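Each `openssl x509 -checkend 86400` run above asks whether a certificate will still be valid 24 hours from now (exit 0 if it will not expire within that window). A rough Go equivalent using crypto/x509; the cert path is taken from the log and error handling is simplified:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Read and parse the PEM-encoded certificate.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	// Equivalent of -checkend 86400: fail if the cert expires within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}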
	I0429 20:05:53.887688   66218 kubeadm.go:391] StartCluster: {Name:no-preload-456788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:05:53.887807   66218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:05:53.887858   66218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:05:53.930491   66218 cri.go:89] found id: ""
	I0429 20:05:53.930563   66218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:05:53.941016   66218 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:05:53.941037   66218 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:05:53.941042   66218 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:05:53.941081   66218 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:05:53.950651   66218 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:05:53.951536   66218 kubeconfig.go:125] found "no-preload-456788" server: "https://192.168.39.235:8443"
	I0429 20:05:53.953451   66218 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:05:53.962857   66218 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.235
	I0429 20:05:53.962879   66218 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:05:53.962889   66218 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:05:53.962932   66218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:05:54.000841   66218 cri.go:89] found id: ""
	I0429 20:05:54.000909   66218 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:05:54.018221   66218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:05:54.028524   66218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:05:54.028556   66218 kubeadm.go:156] found existing configuration files:
	
	I0429 20:05:54.028600   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:05:54.038717   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:05:54.038807   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:05:54.049350   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:05:54.059483   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:05:54.059548   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:05:54.069518   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:05:54.078900   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:05:54.078953   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:05:54.088652   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:05:54.098545   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:05:54.098596   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
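The grep/rm sequence above keeps each kubeconfig only if it already points at https://control-plane.minikube.internal:8443 and otherwise removes it so the following kubeadm phases can regenerate it. A minimal sketch of that loop in Go, with the paths and endpoint taken from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing or pointing elsewhere: drop it and let kubeadm recreate it.
			os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
			continue
		}
		fmt.Printf("kept %s\n", f)
	}
}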
	I0429 20:05:54.108351   66218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:05:54.118645   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:54.236330   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:55.859211   66218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.622843221s)
	I0429 20:05:55.859254   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.075993   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.175176   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
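The restart path runs `kubeadm init` phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than a full init. A minimal sketch of that sequence; the real commands run over SSH as `sudo env PATH=... kubeadm ...`, while here the kubeadm binary is invoked directly by its absolute path for simplicity:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const kubeadm = "/var/lib/minikube/binaries/v1.30.0/kubeadm"
	const config = "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		// Run each `kubeadm init phase ...` against the same config file.
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", config)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "init phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}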
	I0429 20:05:56.274249   66218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:05:56.274469   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:56.775315   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:57.274840   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:57.315656   66218 api_server.go:72] duration metric: took 1.041421989s to wait for apiserver process to appear ...
	I0429 20:05:57.315697   66218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:05:57.315719   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:05:57.316669   66218 api_server.go:269] stopped: https://192.168.39.235:8443/healthz: Get "https://192.168.39.235:8443/healthz": dial tcp 192.168.39.235:8443: connect: connection refused
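After the control-plane phases run, minikube polls the apiserver's /healthz endpoint and tolerates connection-refused while the static pod comes up, as the "stopped" line above shows. A sketch of that wait loop; the endpoint comes from the log, while the skip-verify TLS config and the 4-minute budget are assumptions for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Connection refused is expected until the apiserver container is up.
		resp, err := client.Get("https://192.168.39.235:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}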
	I0429 20:05:55.230409   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230860   66615 main.go:141] libmachine: (old-k8s-version-919612) Found IP for machine: 192.168.72.240
	I0429 20:05:55.230889   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has current primary IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230898   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserving static IP address...
	I0429 20:05:55.231252   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.231287   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | skip adding static IP to network mk-old-k8s-version-919612 - found existing host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"}
	I0429 20:05:55.231305   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserved static IP address: 192.168.72.240
	I0429 20:05:55.231319   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting for SSH to be available...
	I0429 20:05:55.231335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Getting to WaitForSSH function...
	I0429 20:05:55.233198   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.233500   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH client type: external
	I0429 20:05:55.233671   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa (-rw-------)
	I0429 20:05:55.233706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:05:55.233730   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | About to run SSH command:
	I0429 20:05:55.233747   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | exit 0
	I0429 20:05:55.354242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | SSH cmd err, output: <nil>: 
	I0429 20:05:55.354584   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetConfigRaw
	I0429 20:05:55.355221   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.357791   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.358276   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358564   66615 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/config.json ...
	I0429 20:05:55.358786   66615 machine.go:94] provisionDockerMachine start ...
	I0429 20:05:55.358807   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:55.359037   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.361536   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.361861   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.361885   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.362048   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.362247   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362416   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362568   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.362733   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.362930   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.362943   66615 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:05:55.462364   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:05:55.462388   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462632   66615 buildroot.go:166] provisioning hostname "old-k8s-version-919612"
	I0429 20:05:55.462669   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462852   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.465335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465674   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.465706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465836   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.466034   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466208   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466366   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.466525   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.466729   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.466745   66615 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-919612 && echo "old-k8s-version-919612" | sudo tee /etc/hostname
	I0429 20:05:55.596239   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-919612
	
	I0429 20:05:55.596281   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.599221   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599575   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.599606   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599770   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.599970   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600122   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600316   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.600498   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.600667   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.600690   66615 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-919612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-919612/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-919612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:05:55.716588   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:55.716621   66615 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:05:55.716647   66615 buildroot.go:174] setting up certificates
	I0429 20:05:55.716658   66615 provision.go:84] configureAuth start
	I0429 20:05:55.716671   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.716956   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.719569   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.719919   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.719956   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.720095   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.722484   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.722876   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.722912   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.723036   66615 provision.go:143] copyHostCerts
	I0429 20:05:55.723087   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:05:55.723097   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:05:55.723158   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:05:55.723253   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:05:55.723262   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:05:55.723280   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:05:55.723336   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:05:55.723342   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:05:55.723358   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:05:55.723404   66615 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-919612 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-919612]
	I0429 20:05:55.878639   66615 provision.go:177] copyRemoteCerts
	I0429 20:05:55.878724   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:05:55.878750   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.881746   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882306   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.882358   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882540   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.882743   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.882986   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.883139   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:55.973158   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:05:56.003094   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0429 20:05:56.031670   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:05:56.059049   66615 provision.go:87] duration metric: took 342.376371ms to configureAuth
	I0429 20:05:56.059091   66615 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:05:56.059335   66615 config.go:182] Loaded profile config "old-k8s-version-919612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 20:05:56.059441   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.062416   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.062887   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.062921   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.063082   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.063322   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063521   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063688   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.063901   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.064066   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.064082   66615 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:05:56.342484   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:05:56.342511   66615 machine.go:97] duration metric: took 983.711183ms to provisionDockerMachine
	I0429 20:05:56.342525   66615 start.go:293] postStartSetup for "old-k8s-version-919612" (driver="kvm2")
	I0429 20:05:56.342540   66615 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:05:56.342589   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.342931   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:05:56.342983   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.345399   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345710   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.345731   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345869   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.346047   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.346233   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.346418   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.431189   66615 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:05:56.435878   66615 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:05:56.435903   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:05:56.435983   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:05:56.436086   66615 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:05:56.436170   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:05:56.445841   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:56.472683   66615 start.go:296] duration metric: took 130.146591ms for postStartSetup
	I0429 20:05:56.472715   66615 fix.go:56] duration metric: took 21.31705375s for fixHost
	I0429 20:05:56.472736   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.475127   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.475492   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475624   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.475857   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476055   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476211   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.476378   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.476536   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.476547   66615 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 20:05:56.578999   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421156.548872445
	
	I0429 20:05:56.579028   66615 fix.go:216] guest clock: 1714421156.548872445
	I0429 20:05:56.579040   66615 fix.go:229] Guest: 2024-04-29 20:05:56.548872445 +0000 UTC Remote: 2024-04-29 20:05:56.472718546 +0000 UTC m=+226.572342220 (delta=76.153899ms)
	I0429 20:05:56.579068   66615 fix.go:200] guest clock delta is within tolerance: 76.153899ms
	I0429 20:05:56.579076   66615 start.go:83] releasing machines lock for "old-k8s-version-919612", held for 21.423436193s
	I0429 20:05:56.579111   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.579407   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:56.582338   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582673   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.582711   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582856   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583365   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583543   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583625   66615 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:05:56.583667   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.583765   66615 ssh_runner.go:195] Run: cat /version.json
	I0429 20:05:56.583805   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.586263   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586552   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586618   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586656   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586891   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.586953   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586989   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.587060   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587170   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.587240   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587310   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587458   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587462   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.587600   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.672678   66615 ssh_runner.go:195] Run: systemctl --version
	I0429 20:05:56.694175   66615 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:05:56.859009   66615 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:05:56.865723   66615 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:05:56.865798   66615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:05:56.885686   66615 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:05:56.885714   66615 start.go:494] detecting cgroup driver to use...
	I0429 20:05:56.885805   66615 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:05:56.909082   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:05:56.931583   66615 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:05:56.931646   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:05:56.953524   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:05:56.976170   66615 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:05:57.122813   66615 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:05:57.315725   66615 docker.go:233] disabling docker service ...
	I0429 20:05:57.315786   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:05:57.333927   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:05:57.350022   66615 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:05:57.525787   66615 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:05:57.685802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:05:57.703246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:05:57.730558   66615 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0429 20:05:57.730618   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.747081   66615 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:05:57.747133   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.760168   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.773553   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.787609   66615 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
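The two sed edits above point CRI-O at the requested pause image and cgroup manager by rewriting its drop-in config (the conmon_cgroup adjustment is handled the same way). A minimal Go sketch of the same in-place rewrite, with the file path and values taken from the log; running it would require root:

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Replace whole lines, mirroring `sed -i 's|^.*pause_image = .*$|...|'`.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}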
	I0429 20:05:57.800532   66615 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:05:57.813582   66615 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:05:57.813669   66615 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:05:57.832224   66615 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:05:57.844783   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:57.991666   66615 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:05:58.183635   66615 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:05:58.183718   66615 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:05:58.189441   66615 start.go:562] Will wait 60s for crictl version
	I0429 20:05:58.189509   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:05:58.194049   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:05:58.250751   66615 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:05:58.250839   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.292368   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.336121   66615 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0429 20:05:58.337389   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:58.340707   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341125   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:58.341153   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341387   66615 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0429 20:05:58.346434   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:58.361081   66615 kubeadm.go:877] updating cluster {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:05:58.361242   66615 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 20:05:58.361307   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:05:58.414304   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:05:58.414366   66615 ssh_runner.go:195] Run: which lz4
	I0429 20:05:58.420584   66615 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 20:05:58.425682   66615 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:05:58.425712   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
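The stat and scp steps above make up the preload path: minikube checks for /preloaded.tar.lz4 on the guest and, when it is missing, copies the cached tarball over before unpacking it into /var. A minimal sketch of that flow, assuming plain os/exec and a local copy in place of the real SSH transport, could look like:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const preload = "/preloaded.tar.lz4"
        // Cache path taken from the log; the real copy happens over SSH (scp).
        cache := "/home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"

        // Step 1: is the tarball already on the target?
        if _, err := os.Stat(preload); err != nil {
            fmt.Println("preload missing, copying it over:", err)
            // Step 2: copy the cached tarball into place.
            if out, err := exec.Command("sudo", "cp", cache, preload).CombinedOutput(); err != nil {
                fmt.Println("copy failed:", err, string(out))
                return
            }
        }
        // Step 3: unpack into /var, matching the tar invocation later in the log.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", preload)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Println("extract failed:", err, string(out))
        }
    }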
	I0429 20:05:56.606748   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Start
	I0429 20:05:56.606929   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring networks are active...
	I0429 20:05:56.607627   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring network default is active
	I0429 20:05:56.608028   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring network mk-default-k8s-diff-port-866143 is active
	I0429 20:05:56.608557   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Getting domain xml...
	I0429 20:05:56.609325   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Creating domain...
	I0429 20:05:57.911657   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting to get IP...
	I0429 20:05:57.912705   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:57.913118   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:57.913211   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:57.913104   67743 retry.go:31] will retry after 298.590493ms: waiting for machine to come up
	I0429 20:05:58.213730   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.214424   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.214578   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:58.214487   67743 retry.go:31] will retry after 375.439886ms: waiting for machine to come up
	I0429 20:05:58.592145   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.592671   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.592700   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:58.592626   67743 retry.go:31] will retry after 432.890106ms: waiting for machine to come up
	I0429 20:05:59.027344   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.027782   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.027812   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:59.027732   67743 retry.go:31] will retry after 547.616894ms: waiting for machine to come up
	I0429 20:05:59.576555   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.577116   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.577140   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:59.577058   67743 retry.go:31] will retry after 662.088326ms: waiting for machine to come up
	I0429 20:06:00.240907   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.241712   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.241744   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:00.241667   67743 retry.go:31] will retry after 691.874394ms: waiting for machine to come up
	I0429 20:05:57.816218   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.079778   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:01.079817   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:01.079832   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.112008   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:01.112043   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:01.316358   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.322401   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:01.322437   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:01.815974   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.825156   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:01.825219   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:02.316473   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:02.328725   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:02.328763   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:02.816674   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:02.822826   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:02.822866   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:03.315863   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:03.323314   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:03.323366   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:03.816529   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:03.822521   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:03.822556   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:04.316336   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:04.325750   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I0429 20:06:04.337308   66218 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:04.337348   66218 api_server.go:131] duration metric: took 7.02164287s to wait for apiserver health ...
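The healthz probes above amount to polling the apiserver's /healthz endpoint until it returns 200, tolerating the early 403 and 500 responses while post-start hooks finish. A simplified standalone sketch of such a poll, assuming insecure TLS and anonymous access for brevity (minikube itself authenticates with the cluster's client certificates), could be:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify and anonymous access are simplifications for this
        // sketch; anonymous probes are also why a 403 can appear before the 500s.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        url := "https://192.168.39.235:8443/healthz"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver did not become healthy before the deadline")
    }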
	I0429 20:06:04.337361   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:06:04.337370   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:04.505344   66218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:00.520217   66615 crio.go:462] duration metric: took 2.099664395s to copy over tarball
	I0429 20:06:00.520314   66615 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:04.082476   66615 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562128598s)
	I0429 20:06:04.082527   66615 crio.go:469] duration metric: took 3.562271241s to extract the tarball
	I0429 20:06:04.082538   66615 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:04.129338   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:04.177683   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:06:04.177709   66615 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 20:06:04.177762   66615 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.177798   66615 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.177817   66615 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.177834   66615 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.177835   66615 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.177783   66615 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.177897   66615 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0429 20:06:04.177972   66615 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179282   66615 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.179360   66615 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.179361   66615 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.179320   66615 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.179331   66615 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.179299   66615 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0429 20:06:04.323997   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376145   66615 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0429 20:06:04.376210   66615 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376261   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.381592   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.420565   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0429 20:06:04.440670   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0429 20:06:04.461763   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.499283   66615 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0429 20:06:04.499347   66615 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0429 20:06:04.499404   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513860   66615 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0429 20:06:04.513900   66615 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.513946   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513988   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0429 20:06:04.548990   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.556713   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.556942   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.556965   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0429 20:06:04.566227   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.598982   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.656930   66615 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0429 20:06:04.656980   66615 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.657038   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.724922   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0429 20:06:04.725179   66615 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0429 20:06:04.725218   66615 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.725262   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732375   66615 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0429 20:06:04.732429   66615 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.732482   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732492   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.732483   66615 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0429 20:06:04.732669   66615 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.732726   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.735419   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.739785   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.742496   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.834684   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0429 20:06:04.834754   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0429 20:06:04.834811   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0429 20:06:04.847076   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
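The cache_images sequence above probes each required image with podman, and any image whose ID is not present in the runtime is marked for transfer from the local image cache. A simplified sketch of that probe, using os/exec to run the same podman command seen in the log, might be:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imageExists runs the same "podman image inspect --format {{.Id}}" probe
    // that appears in the log and reports whether the runtime already has the image.
    func imageExists(image string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
        return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
        images := []string{
            "registry.k8s.io/kube-apiserver:v1.20.0",
            "registry.k8s.io/pause:3.2",
            "registry.k8s.io/coredns:1.7.0",
        }
        for _, img := range images {
            if imageExists(img) {
                fmt.Println(img, "already present in the runtime")
                continue
            }
            // In the real flow the stale tag is removed with crictl rmi and the
            // image is loaded from .minikube/cache/images; here we only report it.
            fmt.Println(img, "needs transfer from the local image cache")
        }
    }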
	I0429 20:06:00.935382   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.935935   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.935979   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:00.935902   67743 retry.go:31] will retry after 1.024898519s: waiting for machine to come up
	I0429 20:06:01.962446   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:01.963109   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:01.963140   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:01.963059   67743 retry.go:31] will retry after 1.19225855s: waiting for machine to come up
	I0429 20:06:03.157257   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:03.157781   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:03.157843   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:03.157738   67743 retry.go:31] will retry after 1.699779549s: waiting for machine to come up
	I0429 20:06:04.859190   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:04.859622   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:04.859670   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:04.859565   67743 retry.go:31] will retry after 2.307475318s: waiting for machine to come up
	I0429 20:06:04.671477   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:04.684650   66218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:04.718146   66218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:04.908181   66218 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:04.908213   66218 system_pods.go:61] "coredns-7db6d8ff4d-d4kwk" [215ff4b8-3ae5-49a7-8a9f-6acb4d176b93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:04.908223   66218 system_pods.go:61] "etcd-no-preload-456788" [3ec7e177-1b68-4bff-aa4d-803f5346e1be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:04.908231   66218 system_pods.go:61] "kube-apiserver-no-preload-456788" [5e8bf0b0-9669-4f0c-8da1-523589158b16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:04.908236   66218 system_pods.go:61] "kube-controller-manager-no-preload-456788" [515363f7-bde1-4ba7-a5a9-6779f673afaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:04.908240   66218 system_pods.go:61] "kube-proxy-slnph" [29f503bf-ce19-425c-8174-2b8e7b27a424] Running
	I0429 20:06:04.908253   66218 system_pods.go:61] "kube-scheduler-no-preload-456788" [4f394af0-6452-49dd-9770-7c6bfcff3936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:04.908258   66218 system_pods.go:61] "metrics-server-569cc877fc-6mpnm" [5f183615-a243-410a-a524-ebdaa65e6400] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:04.908262   66218 system_pods.go:61] "storage-provisioner" [f74a777d-a3d7-4682-bad0-44bb993a2d43] Running
	I0429 20:06:04.908270   66218 system_pods.go:74] duration metric: took 190.098153ms to wait for pod list to return data ...
	I0429 20:06:04.908278   66218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:05.212876   66218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:05.212913   66218 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:05.212929   66218 node_conditions.go:105] duration metric: took 304.645545ms to run NodePressure ...
	I0429 20:06:05.212950   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:05.913252   66218 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:05.928914   66218 kubeadm.go:733] kubelet initialised
	I0429 20:06:05.928947   66218 kubeadm.go:734] duration metric: took 15.668535ms waiting for restarted kubelet to initialise ...
	I0429 20:06:05.928957   66218 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:05.937357   66218 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:05.091766   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:05.269730   66615 cache_images.go:92] duration metric: took 1.092006107s to LoadCachedImages
	W0429 20:06:05.269839   66615 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0429 20:06:05.269857   66615 kubeadm.go:928] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I0429 20:06:05.269988   66615 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-919612 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:05.270088   66615 ssh_runner.go:195] Run: crio config
	I0429 20:06:05.322439   66615 cni.go:84] Creating CNI manager for ""
	I0429 20:06:05.322471   66615 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:05.322486   66615 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:05.322522   66615 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-919612 NodeName:old-k8s-version-919612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0429 20:06:05.322746   66615 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-919612"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:05.322810   66615 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0429 20:06:05.340981   66615 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:05.341058   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:05.357048   66615 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0429 20:06:05.384352   66615 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:05.407887   66615 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0429 20:06:05.431531   66615 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:05.437567   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:05.457652   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:05.610358   66615 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:05.641538   66615 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612 for IP: 192.168.72.240
	I0429 20:06:05.641568   66615 certs.go:194] generating shared ca certs ...
	I0429 20:06:05.641583   66615 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:05.641758   66615 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:05.641831   66615 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:05.641843   66615 certs.go:256] generating profile certs ...
	I0429 20:06:05.641948   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.key
	I0429 20:06:05.642020   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key.5df5e618
	I0429 20:06:05.642083   66615 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key
	I0429 20:06:05.642256   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:05.642304   66615 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:05.642325   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:05.642364   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:05.642401   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:05.642435   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:05.642489   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:05.643156   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:05.691350   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:05.734434   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:05.773056   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:05.819778   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 20:06:05.868256   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:05.911589   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:05.957714   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 20:06:06.002120   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:06.039736   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:06.079636   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:06.118317   66615 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:06.145932   66615 ssh_runner.go:195] Run: openssl version
	I0429 20:06:06.152970   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:06.166609   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.171939   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.172033   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.179153   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:06.193491   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:06.207800   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214803   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214876   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.222154   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:06.236908   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:06.254197   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260797   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260863   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.267635   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:06.282727   66615 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:06.289580   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:06.301014   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:06.310503   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:06.318708   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:06.325718   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:06.332690   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
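	(Editorial note: the `openssl x509 ... -checkend 86400` invocations above ask whether each control-plane certificate expires within the next 24 hours; a non-zero exit would force regeneration. A minimal Go sketch of the same check, assuming an example cert path; this is illustrative only and not minikube's code:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Equivalent of `openssl x509 -noout -in <file> -checkend 86400`:
		// report whether the certificate expires within the next 24 hours.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // example path
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("not a PEM-encoded certificate")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least 24h")
	}
	)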
	I0429 20:06:06.339914   66615 kubeadm.go:391] StartCluster: {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:06.340012   66615 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:06.340069   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.391511   66615 cri.go:89] found id: ""
	I0429 20:06:06.391618   66615 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:06.408955   66615 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:06.408985   66615 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:06.408991   66615 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:06.409060   66615 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:06.425276   66615 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:06.426397   66615 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-919612" does not appear in /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:06:06.427298   66615 kubeconfig.go:62] /home/jenkins/minikube-integration/18774-7754/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-919612" cluster setting kubeconfig missing "old-k8s-version-919612" context setting]
	I0429 20:06:06.428287   66615 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:06.429908   66615 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:06.443630   66615 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.240
	I0429 20:06:06.443674   66615 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:06.443686   66615 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:06.443753   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.486251   66615 cri.go:89] found id: ""
	I0429 20:06:06.486339   66615 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:06.507136   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:06.523798   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:06.523828   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:06.523887   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:06:06.536668   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:06.536735   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:06.547800   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:06:06.560435   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:06.560517   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:06.572227   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.582772   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:06.582825   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.594168   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:06:06.605940   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:06.606013   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:06.621829   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:06.637520   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:06.779910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:07.921143   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.141191032s)
	I0429 20:06:07.921178   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.172381   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.276243   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.398312   66615 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:08.398424   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:08.899388   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.399344   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.898731   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
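	(Editorial note: after the `kubeadm init phase` commands, api_server.go polls roughly every 500ms for a kube-apiserver process by running pgrep until one appears. A standalone sketch of that style of process poll using os/exec; minikube actually runs the command on the guest over SSH, so this is a local illustration only:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only if a matching process exists, so a nil error means "found".
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
		}
		fmt.Println("timed out waiting for kube-apiserver process")
	}
	)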
	I0429 20:06:07.168679   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:07.169214   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:07.169264   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:07.169146   67743 retry.go:31] will retry after 2.050354993s: waiting for machine to come up
	I0429 20:06:09.221915   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:09.222545   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:09.222581   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:09.222449   67743 retry.go:31] will retry after 2.544889222s: waiting for machine to come up
	I0429 20:06:07.947247   66218 pod_ready.go:102] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:10.449364   66218 pod_ready.go:102] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:10.943731   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:10.943754   66218 pod_ready.go:81] duration metric: took 5.006367348s for pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:10.943763   66218 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.453825   66218 pod_ready.go:92] pod "etcd-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.453853   66218 pod_ready.go:81] duration metric: took 1.510082371s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.453865   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.462971   66218 pod_ready.go:92] pod "kube-apiserver-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.462997   66218 pod_ready.go:81] duration metric: took 9.123374ms for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.463011   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.471032   66218 pod_ready.go:92] pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.471066   66218 pod_ready.go:81] duration metric: took 8.024113ms for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.471077   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-slnph" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.478671   66218 pod_ready.go:92] pod "kube-proxy-slnph" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.478695   66218 pod_ready.go:81] duration metric: took 7.609313ms for pod "kube-proxy-slnph" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.478706   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.542851   66218 pod_ready.go:92] pod "kube-scheduler-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.542875   66218 pod_ready.go:81] duration metric: took 64.16109ms for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.542888   66218 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" ...
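	(Editorial note: the pod_ready.go lines above poll each control-plane pod until its Ready condition is True, or a 4m0s timeout elapses. A rough sketch of such a readiness poll using client-go; the kubeconfig path, namespace, pod name, and 2s interval are illustrative assumptions, not minikube's actual implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // example path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-456788", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
	)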
	I0429 20:06:10.399055   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:10.898742   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.399250   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.898511   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.399301   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.899399   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.399242   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.899417   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.398526   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.898976   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.768576   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:11.768967   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:11.769003   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:11.768924   67743 retry.go:31] will retry after 3.829285986s: waiting for machine to come up
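	(Editorial note: the retry.go lines above show libmachine repeatedly probing the libvirt network for the VM's DHCP lease, sleeping a little longer after each failed attempt (~2.1s, ~2.5s, ~3.8s). A generic sketch of that kind of retry-with-growing-backoff loop; minikube's retry helper differs in detail, so treat this as an illustration only:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor retries fn with a growing, jittered delay until it succeeds
	// or the overall timeout is exceeded.
	func waitFor(fn func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 1 * time.Second
		for attempt := 1; ; attempt++ {
			if err := fn(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out")
			}
			sleep := delay + time.Duration(rand.Int63n(int64(time.Second))) // add jitter
			fmt.Printf("attempt %d failed, retrying after %v\n", attempt, sleep)
			time.Sleep(sleep)
			delay += 500 * time.Millisecond // grow the base delay each round
		}
	}

	func main() {
		err := waitFor(func() error {
			// placeholder for "does the domain have an IP address yet?"
			return errors.New("no lease yet")
		}, 10*time.Second)
		fmt.Println("result:", err)
	}
	)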
	I0429 20:06:17.032004   65980 start.go:364] duration metric: took 56.727982697s to acquireMachinesLock for "embed-certs-161370"
	I0429 20:06:17.032074   65980 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:06:17.032085   65980 fix.go:54] fixHost starting: 
	I0429 20:06:17.032452   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:17.032485   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:17.050767   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44211
	I0429 20:06:17.051181   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:17.051655   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:06:17.051680   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:17.052002   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:17.052188   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:17.052363   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:06:17.053975   65980 fix.go:112] recreateIfNeeded on embed-certs-161370: state=Stopped err=<nil>
	I0429 20:06:17.054002   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	W0429 20:06:17.054167   65980 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:06:17.056054   65980 out.go:177] * Restarting existing kvm2 VM for "embed-certs-161370" ...
	I0429 20:06:14.550615   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:17.050288   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:17.057452   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Start
	I0429 20:06:17.057630   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring networks are active...
	I0429 20:06:17.058381   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring network default is active
	I0429 20:06:17.058680   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring network mk-embed-certs-161370 is active
	I0429 20:06:17.059024   65980 main.go:141] libmachine: (embed-certs-161370) Getting domain xml...
	I0429 20:06:17.059697   65980 main.go:141] libmachine: (embed-certs-161370) Creating domain...
	I0429 20:06:15.599423   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.599897   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has current primary IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.599915   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Found IP for machine: 192.168.61.106
	I0429 20:06:15.599929   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Reserving static IP address...
	I0429 20:06:15.600318   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Reserved static IP address: 192.168.61.106
	I0429 20:06:15.600360   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-866143", mac: "52:54:00:af:de:09", ip: "192.168.61.106"} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.600375   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for SSH to be available...
	I0429 20:06:15.600405   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | skip adding static IP to network mk-default-k8s-diff-port-866143 - found existing host DHCP lease matching {name: "default-k8s-diff-port-866143", mac: "52:54:00:af:de:09", ip: "192.168.61.106"}
	I0429 20:06:15.600423   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Getting to WaitForSSH function...
	I0429 20:06:15.602983   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.603379   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.603414   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.603581   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Using SSH client type: external
	I0429 20:06:15.603611   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa (-rw-------)
	I0429 20:06:15.603675   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:06:15.603701   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | About to run SSH command:
	I0429 20:06:15.603733   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | exit 0
	I0429 20:06:15.734933   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | SSH cmd err, output: <nil>: 
	I0429 20:06:15.735306   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetConfigRaw
	I0429 20:06:15.735918   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:15.738878   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.739349   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.739385   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.739745   66875 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/config.json ...
	I0429 20:06:15.739943   66875 machine.go:94] provisionDockerMachine start ...
	I0429 20:06:15.739966   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:15.740215   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.742731   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.743068   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.743097   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.743253   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.743448   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.743592   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.743729   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.743859   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.744066   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.744080   66875 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:06:15.855258   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:06:15.855292   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:15.855585   66875 buildroot.go:166] provisioning hostname "default-k8s-diff-port-866143"
	I0429 20:06:15.855604   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:15.855792   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.858278   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.858644   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.858672   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.858802   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.858996   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.859179   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.859327   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.859498   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.859667   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.859682   66875 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-866143 && echo "default-k8s-diff-port-866143" | sudo tee /etc/hostname
	I0429 20:06:15.986031   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-866143
	
	I0429 20:06:15.986094   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.989211   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.989633   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.989666   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.989858   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.990078   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.990281   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.990441   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.990591   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.990746   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.990763   66875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-866143' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-866143/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-866143' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:06:16.119358   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:06:16.119389   66875 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:06:16.119420   66875 buildroot.go:174] setting up certificates
	I0429 20:06:16.119431   66875 provision.go:84] configureAuth start
	I0429 20:06:16.119442   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:16.119741   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:16.122611   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.122991   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.123016   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.123180   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.125378   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.125673   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.125713   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.125805   66875 provision.go:143] copyHostCerts
	I0429 20:06:16.125883   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:06:16.125896   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:06:16.125963   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:06:16.126112   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:06:16.126125   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:06:16.126152   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:06:16.126234   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:06:16.126245   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:06:16.126270   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:06:16.126348   66875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-866143 san=[127.0.0.1 192.168.61.106 default-k8s-diff-port-866143 localhost minikube]
	I0429 20:06:16.280583   66875 provision.go:177] copyRemoteCerts
	I0429 20:06:16.280641   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:06:16.280665   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.283452   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.283760   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.283800   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.283999   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.284175   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.284335   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.284428   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:16.374564   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:06:16.408695   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0429 20:06:16.441975   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:06:16.470921   66875 provision.go:87] duration metric: took 351.479703ms to configureAuth
	I0429 20:06:16.470946   66875 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:06:16.471124   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:16.471205   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.473799   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.474105   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.474139   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.474291   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.474502   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.474692   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.474830   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.474995   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:16.475152   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:16.475167   66875 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:06:16.774044   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:06:16.774093   66875 machine.go:97] duration metric: took 1.034135495s to provisionDockerMachine
	I0429 20:06:16.774108   66875 start.go:293] postStartSetup for "default-k8s-diff-port-866143" (driver="kvm2")
	I0429 20:06:16.774123   66875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:06:16.774148   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:16.774509   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:06:16.774539   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.777163   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.777603   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.777639   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.777779   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.777949   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.778109   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.778259   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:16.866104   66875 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:06:16.870760   66875 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:06:16.870780   66875 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:06:16.870839   66875 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:06:16.870916   66875 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:06:16.871003   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:06:16.881137   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:16.911284   66875 start.go:296] duration metric: took 137.163661ms for postStartSetup
	I0429 20:06:16.911318   66875 fix.go:56] duration metric: took 20.332102679s for fixHost
	I0429 20:06:16.911337   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.914440   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.914810   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.914838   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.915087   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.915287   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.915511   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.915692   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.915886   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:16.916034   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:16.916045   66875 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:06:17.031867   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421177.003309274
	
	I0429 20:06:17.031892   66875 fix.go:216] guest clock: 1714421177.003309274
	I0429 20:06:17.031900   66875 fix.go:229] Guest: 2024-04-29 20:06:17.003309274 +0000 UTC Remote: 2024-04-29 20:06:16.911322778 +0000 UTC m=+211.453402116 (delta=91.986496ms)
	I0429 20:06:17.031921   66875 fix.go:200] guest clock delta is within tolerance: 91.986496ms
	I0429 20:06:17.031928   66875 start.go:83] releasing machines lock for "default-k8s-diff-port-866143", held for 20.452741912s
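	(Editorial note: the stray "%!s(MISSING)" and "%!N(MISSING)" tokens in the provisioning commands above (the crio.minikube printf and the `date +%s.%N` call) are not part of the shell that was actually executed; they are almost certainly Go's fmt package marking format verbs that had no matching argument when the command template was rendered into the log. A tiny illustration of that behavior:

	package main

	import "fmt"

	func main() {
		// Two verbs, zero arguments: fmt substitutes %!verb(MISSING) for each.
		fmt.Printf("date +%s.%N\n")
		// prints: date +%!s(MISSING).%!N(MISSING)
	}
	)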
	I0429 20:06:17.031957   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.032261   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:17.035096   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.035467   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.035497   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.035620   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036246   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036425   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036515   66875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:06:17.036569   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:17.036698   66875 ssh_runner.go:195] Run: cat /version.json
	I0429 20:06:17.036726   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:17.039300   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039595   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039813   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.039848   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039907   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:17.039984   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.040017   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.040069   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:17.040172   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:17.040230   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:17.040329   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:17.040382   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:17.040483   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:17.040636   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:17.137510   66875 ssh_runner.go:195] Run: systemctl --version
	I0429 20:06:17.160834   66875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:06:17.320792   66875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:06:17.328367   66875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:06:17.328448   66875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:06:17.349698   66875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:06:17.349724   66875 start.go:494] detecting cgroup driver to use...
	I0429 20:06:17.349807   66875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:06:17.372156   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:06:17.388142   66875 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:06:17.388206   66875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:06:17.406108   66875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:06:17.422323   66875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:06:17.555079   66875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:06:17.727126   66875 docker.go:233] disabling docker service ...
	I0429 20:06:17.727194   66875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:06:17.743136   66875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:06:17.757045   66875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:06:17.885705   66875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:06:18.021993   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:06:18.039020   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:06:18.063267   66875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:06:18.063330   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.076473   66875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:06:18.076545   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.089566   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.102912   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.116940   66875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:06:18.130940   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.150505   66875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.177724   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
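	(Editorial note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the expected pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. Reconstructed from those commands (not copied from the VM, and with TOML section headers omitted), the affected keys in that drop-in should end up roughly as:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	)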
	I0429 20:06:18.191088   66875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:06:18.203560   66875 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:06:18.203635   66875 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:06:18.221087   66875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:06:18.233719   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:18.383406   66875 ssh_runner.go:195] Run: sudo systemctl restart crio
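
Taken together, the sed edits above amount to a CRI-O drop-in along the lines of the sketch below. This is an illustrative reconstruction under assumed stock section headers, not the literal 02-crio.conf from this run:

# Illustrative reconstruction of the drop-in produced by the sed edits above;
# the [crio.image]/[crio.runtime] section placement is assumed from stock CRI-O defaults.
cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF
sudo systemctl daemon-reload && sudo systemctl restart crio
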
	I0429 20:06:18.543941   66875 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:06:18.544029   66875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:06:18.550828   66875 start.go:562] Will wait 60s for crictl version
	I0429 20:06:18.550891   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:06:18.556158   66875 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:06:18.607004   66875 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:06:18.607083   66875 ssh_runner.go:195] Run: crio --version
	I0429 20:06:18.638282   66875 ssh_runner.go:195] Run: crio --version
	I0429 20:06:18.674135   66875 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:06:15.399474   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:15.899352   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.899106   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.899205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.399351   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.899319   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.399303   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.898824   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.675590   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:18.678673   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:18.679055   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:18.679096   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:18.679272   66875 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0429 20:06:18.685110   66875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:18.705804   66875 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:06:18.705967   66875 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:06:18.706036   66875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:18.750754   66875 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:06:18.750823   66875 ssh_runner.go:195] Run: which lz4
	I0429 20:06:18.755893   66875 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:06:18.760892   66875 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:06:18.760921   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 20:06:19.055680   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:21.552080   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:18.301855   65980 main.go:141] libmachine: (embed-certs-161370) Waiting to get IP...
	I0429 20:06:18.302804   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.303231   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.303273   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.303198   67921 retry.go:31] will retry after 279.123731ms: waiting for machine to come up
	I0429 20:06:18.584013   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.584661   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.584703   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.584630   67921 retry.go:31] will retry after 239.910483ms: waiting for machine to come up
	I0429 20:06:18.825978   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.826393   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.826425   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.826349   67921 retry.go:31] will retry after 312.324444ms: waiting for machine to come up
	I0429 20:06:19.139999   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:19.140583   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:19.140611   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:19.140535   67921 retry.go:31] will retry after 498.525047ms: waiting for machine to come up
	I0429 20:06:19.640244   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:19.640797   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:19.640828   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:19.640756   67921 retry.go:31] will retry after 479.301061ms: waiting for machine to come up
	I0429 20:06:20.121396   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:20.121982   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:20.122015   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:20.121941   67921 retry.go:31] will retry after 706.389673ms: waiting for machine to come up
	I0429 20:06:20.829691   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:20.830191   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:20.830247   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:20.830166   67921 retry.go:31] will retry after 1.145397308s: waiting for machine to come up
	I0429 20:06:21.977290   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:21.977747   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:21.977779   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:21.977691   67921 retry.go:31] will retry after 955.977029ms: waiting for machine to come up
	I0429 20:06:20.399233   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.898571   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.398855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.898885   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.399328   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.398965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.899248   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.398833   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.899039   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.561047   66875 crio.go:462] duration metric: took 1.805186908s to copy over tarball
	I0429 20:06:20.561137   66875 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:23.264543   66875 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.703371921s)
	I0429 20:06:23.264573   66875 crio.go:469] duration metric: took 2.7034954s to extract the tarball
	I0429 20:06:23.264581   66875 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:23.303558   66875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:23.356825   66875 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 20:06:23.356854   66875 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:06:23.356873   66875 kubeadm.go:928] updating node { 192.168.61.106 8444 v1.30.0 crio true true} ...
	I0429 20:06:23.357007   66875 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-866143 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:23.357105   66875 ssh_runner.go:195] Run: crio config
	I0429 20:06:23.414195   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:06:23.414225   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:23.414237   66875 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:23.414267   66875 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.106 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-866143 NodeName:default-k8s-diff-port-866143 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:06:23.414459   66875 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.106
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-866143"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:23.414524   66875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:06:23.425977   66875 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:23.426089   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:23.437270   66875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0429 20:06:23.457613   66875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:23.479383   66875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
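
The rendered kubeadm config above is what lands in /var/tmp/minikube/kubeadm.yaml.new here. As a side note (not something this run does), a config of this shape can be sanity-checked offline with kubeadm's dry-run mode:

# Illustrative only: renders the manifests without modifying the node,
# using the kubeadm binary path and config path seen in the log above.
sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
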
	I0429 20:06:23.509517   66875 ssh_runner.go:195] Run: grep 192.168.61.106	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:23.514202   66875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:23.528721   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:23.666941   66875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:23.687710   66875 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143 for IP: 192.168.61.106
	I0429 20:06:23.687745   66875 certs.go:194] generating shared ca certs ...
	I0429 20:06:23.687768   66875 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:23.687952   66875 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:23.688005   66875 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:23.688020   66875 certs.go:256] generating profile certs ...
	I0429 20:06:23.688168   66875 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.key
	I0429 20:06:23.688260   66875 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.key.5d7fbd4b
	I0429 20:06:23.688318   66875 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.key
	I0429 20:06:23.688481   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:23.688532   66875 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:23.688548   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:23.688592   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:23.688628   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:23.688663   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:23.688722   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:23.689611   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:23.743834   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:23.783115   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:23.819086   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:23.850794   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0429 20:06:23.882477   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:23.918607   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:23.947837   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:06:23.977241   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:24.005902   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:24.034910   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:24.064119   66875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:24.083879   66875 ssh_runner.go:195] Run: openssl version
	I0429 20:06:24.090651   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:24.104929   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.110955   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.111034   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.117914   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:24.131076   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:24.144790   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.150842   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.150926   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.157842   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:24.171737   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:24.186164   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.191924   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.191995   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.199385   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
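
The /etc/ssl/certs/<hash>.0 symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes reported by the preceding `openssl x509 -hash -noout` calls. An equivalent manual sketch for the minikubeCA case:

# Illustrative: derive the subject hash and create the matching OpenSSL-style symlink.
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
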
	I0429 20:06:24.213392   66875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:24.219369   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:24.226784   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:24.234655   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:24.242406   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:24.249904   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:24.257400   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
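
Each `-checkend 86400` call above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; an exit status of 0 means it will. For example:

# Illustrative: -checkend exits 0 if the certificate does not expire within the window.
if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
  echo "certificate valid for at least another 24h"
else
  echo "certificate expires within 24h"
fi
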
	I0429 20:06:24.264165   66875 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:24.264290   66875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:24.264353   66875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:24.310126   66875 cri.go:89] found id: ""
	I0429 20:06:24.310197   66875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:24.322134   66875 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:24.322155   66875 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:24.322160   66875 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:24.322223   66875 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:24.337713   66875 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:24.339184   66875 kubeconfig.go:125] found "default-k8s-diff-port-866143" server: "https://192.168.61.106:8444"
	I0429 20:06:24.342237   66875 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:24.353500   66875 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.106
	I0429 20:06:24.353545   66875 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:24.353560   66875 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:24.353627   66875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:24.399835   66875 cri.go:89] found id: ""
	I0429 20:06:24.399918   66875 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:24.426456   66875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:24.440261   66875 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:24.440282   66875 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:24.440376   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0429 20:06:24.450699   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:24.450766   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:24.462870   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0429 20:06:24.474894   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:24.474961   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:24.488607   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0429 20:06:24.499626   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:24.499685   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:24.514156   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0429 20:06:24.525958   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:24.526018   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:24.537063   66875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:24.548503   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:24.687916   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:24.051367   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:26.550970   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:22.935362   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:22.935797   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:22.935827   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:22.935746   67921 retry.go:31] will retry after 1.25494649s: waiting for machine to come up
	I0429 20:06:24.192017   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:24.192613   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:24.192641   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:24.192556   67921 retry.go:31] will retry after 1.641885834s: waiting for machine to come up
	I0429 20:06:25.836686   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:25.837170   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:25.837193   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:25.837125   67921 retry.go:31] will retry after 2.794216099s: waiting for machine to come up
	I0429 20:06:25.398515   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:25.898944   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.399360   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.899294   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.399520   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.899434   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.398734   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.898479   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.399413   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.899236   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.234143   66875 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.546180467s)
	I0429 20:06:26.234181   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.502030   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.577778   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.689836   66875 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:26.689982   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.190231   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.690207   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.729434   66875 api_server.go:72] duration metric: took 1.039599386s to wait for apiserver process to appear ...
	I0429 20:06:27.729473   66875 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:06:27.729497   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:27.730016   66875 api_server.go:269] stopped: https://192.168.61.106:8444/healthz: Get "https://192.168.61.106:8444/healthz": dial tcp 192.168.61.106:8444: connect: connection refused
	I0429 20:06:28.230353   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:28.551049   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:31.051387   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:31.411151   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:31.411188   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:31.411205   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:31.424074   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:31.424106   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:31.729916   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:31.737269   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:31.737299   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:32.229834   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:32.237900   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:32.237935   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:32.730529   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:32.735043   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 200:
	ok
	I0429 20:06:32.743999   66875 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:32.744026   66875 api_server.go:131] duration metric: took 5.014546615s to wait for apiserver health ...
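
The healthz probe above walks through the expected restart sequence: connection refused while the apiserver binds, 403 for the anonymous user until the rbac/bootstrap-roles post-start hook grants /healthz access, 500 while the remaining hooks settle, then 200. A minimal stand-alone poll that reproduces the same wait might look like this (illustrative, not minikube's own code):

# Illustrative healthz poll; -k skips TLS verification, and the endpoint is the
# apiserver address used by this profile (192.168.61.106:8444).
until [ "$(curl -k -s -o /dev/null -w '%{http_code}' https://192.168.61.106:8444/healthz)" = "200" ]; do
  sleep 0.5
done
echo "apiserver healthy"
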
	I0429 20:06:32.744035   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:06:32.744041   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:32.745889   66875 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:28.633451   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:28.633950   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:28.633979   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:28.633906   67921 retry.go:31] will retry after 2.251092878s: waiting for machine to come up
	I0429 20:06:30.887722   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:30.888251   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:30.888283   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:30.888208   67921 retry.go:31] will retry after 2.941721217s: waiting for machine to come up
	I0429 20:06:32.747198   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:32.760578   66875 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
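
The 496-byte conflist written here is the bridge CNI configuration referred to by "Configuring bridge CNI" above. Its exact contents are not reproduced in the log; a bridge conflist of this general shape typically looks like the sketch below (field values are assumptions, only the 10.244.0.0/16 pod subnet is taken from the config above):

# Illustrative shape of a bridge CNI conflist; not the literal 496-byte file from this run.
cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
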
	I0429 20:06:32.786719   66875 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:32.797795   66875 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:32.797830   66875 system_pods.go:61] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:32.797838   66875 system_pods.go:61] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:32.797844   66875 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:32.797854   66875 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:32.797859   66875 system_pods.go:61] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0429 20:06:32.797866   66875 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:32.797873   66875 system_pods.go:61] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:32.797880   66875 system_pods.go:61] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0429 20:06:32.797888   66875 system_pods.go:74] duration metric: took 11.14839ms to wait for pod list to return data ...
	I0429 20:06:32.797902   66875 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:32.801888   66875 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:32.801909   66875 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:32.801918   66875 node_conditions.go:105] duration metric: took 4.010782ms to run NodePressure ...
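
The NodePressure check reads the node's reported capacity (17734596Ki of ephemeral storage, 2 CPUs). Assuming the profile's kubeconfig context, the same figures can be read directly off the cluster:

# Illustrative: print each node's name and reported capacity map.
kubectl --context default-k8s-diff-port-866143 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"  "}{.status.capacity}{"\n"}{end}'
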
	I0429 20:06:32.801934   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:33.088679   66875 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:33.094165   66875 kubeadm.go:733] kubelet initialised
	I0429 20:06:33.094185   66875 kubeadm.go:734] duration metric: took 5.479589ms waiting for restarted kubelet to initialise ...
	I0429 20:06:33.094192   66875 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:33.101524   66875 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.106879   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.106911   66875 pod_ready.go:81] duration metric: took 5.352162ms for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.106923   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.106946   66875 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.111446   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.111469   66875 pod_ready.go:81] duration metric: took 4.507858ms for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.111478   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.111483   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.115613   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.115643   66875 pod_ready.go:81] duration metric: took 4.152743ms for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.115654   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.115663   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.191660   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.191695   66875 pod_ready.go:81] duration metric: took 76.012388ms for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.191707   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.191713   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.592489   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-proxy-zddtx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.592522   66875 pod_ready.go:81] duration metric: took 400.801861ms for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.592535   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-proxy-zddtx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.592544   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.990624   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.990655   66875 pod_ready.go:81] duration metric: took 398.101779ms for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.990667   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.990673   66875 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:34.391120   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:34.391148   66875 pod_ready.go:81] duration metric: took 400.467456ms for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:34.391165   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:34.391173   66875 pod_ready.go:38] duration metric: took 1.296972775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
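The block above is minikube's post-restart wait for the system-critical pods (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler); each check is skipped while the node itself still reports Ready=False. A rough manual equivalent, assuming kubectl is pointed at the same default-k8s-diff-port-866143 context, would be:

    $ kubectl --context default-k8s-diff-port-866143 -n kube-system wait pod \
        -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    $ kubectl --context default-k8s-diff-port-866143 -n kube-system wait pod \
        -l component=kube-apiserver --for=condition=Ready --timeout=4m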
	I0429 20:06:34.391191   66875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:06:34.408817   66875 ops.go:34] apiserver oom_adj: -16
	I0429 20:06:34.408845   66875 kubeadm.go:591] duration metric: took 10.086677852s to restartPrimaryControlPlane
	I0429 20:06:34.408856   66875 kubeadm.go:393] duration metric: took 10.144698168s to StartCluster
	I0429 20:06:34.408876   66875 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:34.408961   66875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:06:34.411093   66875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:34.411379   66875 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:06:34.413055   66875 out.go:177] * Verifying Kubernetes components...
	I0429 20:06:34.411518   66875 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:06:34.411607   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:34.414229   66875 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414239   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:34.414261   66875 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-866143"
	I0429 20:06:34.414238   66875 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414232   66875 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414341   66875 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.414355   66875 addons.go:243] addon metrics-server should already be in state true
	I0429 20:06:34.414382   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.414381   66875 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.414396   66875 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:06:34.414439   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.414650   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414677   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.414746   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414758   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.414890   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414923   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.433279   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I0429 20:06:34.433827   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.434444   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.434474   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.434873   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.435436   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.435483   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.435739   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46105
	I0429 20:06:34.435746   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0429 20:06:34.436117   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.436245   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.436638   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.436678   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.436734   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.436747   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.437011   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.437057   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.437218   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.437601   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.437630   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.441092   66875 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.441118   66875 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:06:34.441146   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.441550   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.441582   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.451571   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0429 20:06:34.452041   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.452627   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.452650   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.453080   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.453401   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.455145   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0429 20:06:34.455335   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.457339   66875 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:34.455992   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.456826   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I0429 20:06:34.458912   66875 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:06:34.458925   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:06:34.458942   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.459155   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.459818   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.459836   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.460050   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.460068   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.460196   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.460406   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.460450   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.461005   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.461051   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.462529   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.462624   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.464140   66875 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:06:30.398730   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:30.898542   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.399309   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.898751   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.399374   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.899262   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.398723   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.899281   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.399356   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.899305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.463014   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.463255   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.465585   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.465598   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:06:34.465623   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:06:34.465652   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.465703   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.465892   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.466043   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.468951   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.469342   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.469407   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.469645   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.469817   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.469984   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.470137   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.484411   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0429 20:06:34.484864   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.485366   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.485396   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.485759   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.485937   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.487715   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.487962   66875 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:06:34.487975   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:06:34.487989   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.490407   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.490724   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.490748   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.490890   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.491045   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.491146   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.491274   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.618088   66875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:34.638582   66875 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-866143" to be "Ready" ...
	I0429 20:06:34.729046   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:06:34.729633   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:06:34.729649   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:06:34.752200   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:06:34.770107   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:06:34.770143   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:06:34.847081   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:06:34.847117   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:06:34.889992   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
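At this point the addon manifests (storage-provisioner, storageclass, and the four metrics-server files scp'd above) are being applied with the bundled kubectl inside the guest. A hedged way to confirm the metrics-server rollout afterwards, assuming the same kubeconfig and the deployment name implied by the pod names in this log, would be:

    $ kubectl --context default-k8s-diff-port-866143 -n kube-system \
        rollout status deployment/metrics-server --timeout=2m
    $ kubectl --context default-k8s-diff-port-866143 top nodes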
	I0429 20:06:35.821090   66875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091986938s)
	I0429 20:06:35.821127   66875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.068905753s)
	I0429 20:06:35.821145   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821150   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821157   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821162   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821490   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821505   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821514   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821524   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821528   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821534   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821549   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821540   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821902   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821923   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821936   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.822007   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.822024   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.828303   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.828348   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.828591   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.828606   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.828632   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.843540   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.843566   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.843860   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.843877   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.843886   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.843894   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.844127   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.844170   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.844188   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.844203   66875 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-866143"
	I0429 20:06:35.846214   66875 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0429 20:06:33.549917   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:35.550564   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:33.831181   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:33.831552   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:33.831581   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:33.831506   67921 retry.go:31] will retry after 5.040485428s: waiting for machine to come up
	I0429 20:06:35.399419   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:35.899244   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.398934   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.898847   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.399273   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.899102   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.398748   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.399524   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.898813   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:35.847674   66875 addons.go:505] duration metric: took 1.436173952s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
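The addon phase completes here with storage-provisioner, default-storageclass and metrics-server enabled, matching the toEnable map logged at 20:06:34.411. Outside the test harness the same set can be toggled with the minikube CLI against this profile, e.g. (illustrative):

    $ minikube -p default-k8s-diff-port-866143 addons enable metrics-server
    $ minikube -p default-k8s-diff-port-866143 addons list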
	I0429 20:06:36.641963   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:38.642738   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
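The node_ready poller keeps reporting Ready=False for default-k8s-diff-port-866143 while the kubelet and CNI come back up. When debugging this kind of wait by hand, the usual first checks (a sketch, same context assumed) are:

    $ kubectl --context default-k8s-diff-port-866143 get nodes -o wide
    $ kubectl --context default-k8s-diff-port-866143 describe node default-k8s-diff-port-866143 \
        | grep -A8 'Conditions:'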
	I0429 20:06:38.873188   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.873625   65980 main.go:141] libmachine: (embed-certs-161370) Found IP for machine: 192.168.50.184
	I0429 20:06:38.873653   65980 main.go:141] libmachine: (embed-certs-161370) Reserving static IP address...
	I0429 20:06:38.873669   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has current primary IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.874037   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "embed-certs-161370", mac: "52:54:00:e6:05:1f", ip: "192.168.50.184"} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:38.874091   65980 main.go:141] libmachine: (embed-certs-161370) Reserved static IP address: 192.168.50.184
	I0429 20:06:38.874113   65980 main.go:141] libmachine: (embed-certs-161370) DBG | skip adding static IP to network mk-embed-certs-161370 - found existing host DHCP lease matching {name: "embed-certs-161370", mac: "52:54:00:e6:05:1f", ip: "192.168.50.184"}
	I0429 20:06:38.874132   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Getting to WaitForSSH function...
	I0429 20:06:38.874151   65980 main.go:141] libmachine: (embed-certs-161370) Waiting for SSH to be available...
	I0429 20:06:38.875891   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.876205   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:38.876237   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.876401   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Using SSH client type: external
	I0429 20:06:38.876425   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa (-rw-------)
	I0429 20:06:38.876455   65980 main.go:141] libmachine: (embed-certs-161370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:06:38.876475   65980 main.go:141] libmachine: (embed-certs-161370) DBG | About to run SSH command:
	I0429 20:06:38.876486   65980 main.go:141] libmachine: (embed-certs-161370) DBG | exit 0
	I0429 20:06:39.006684   65980 main.go:141] libmachine: (embed-certs-161370) DBG | SSH cmd err, output: <nil>: 
	I0429 20:06:39.007072   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetConfigRaw
	I0429 20:06:39.007701   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:39.010189   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.010539   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.010577   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.010783   65980 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/config.json ...
	I0429 20:06:39.010970   65980 machine.go:94] provisionDockerMachine start ...
	I0429 20:06:39.010986   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:39.011196   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.013422   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.013832   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.013862   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.013986   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.014183   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.014377   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.014528   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.014710   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.014868   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.014878   65980 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:06:39.119151   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:06:39.119183   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.119425   65980 buildroot.go:166] provisioning hostname "embed-certs-161370"
	I0429 20:06:39.119449   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.119606   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.122418   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.122725   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.122755   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.122894   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.123087   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.123235   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.123371   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.123547   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.123719   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.123734   65980 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-161370 && echo "embed-certs-161370" | sudo tee /etc/hostname
	I0429 20:06:39.247323   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-161370
	
	I0429 20:06:39.247360   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.250202   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.250594   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.250623   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.250761   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.250956   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.251158   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.251354   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.251536   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.251724   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.251746   65980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-161370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-161370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-161370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:06:39.370366   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:06:39.370395   65980 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:06:39.370415   65980 buildroot.go:174] setting up certificates
	I0429 20:06:39.370429   65980 provision.go:84] configureAuth start
	I0429 20:06:39.370441   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.370754   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:39.373600   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.373977   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.374011   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.374305   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.376654   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.376999   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.377032   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.377156   65980 provision.go:143] copyHostCerts
	I0429 20:06:39.377217   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:06:39.377228   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:06:39.377279   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:06:39.377367   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:06:39.377375   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:06:39.377393   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:06:39.377446   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:06:39.377453   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:06:39.377470   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:06:39.377523   65980 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.embed-certs-161370 san=[127.0.0.1 192.168.50.184 embed-certs-161370 localhost minikube]
	I0429 20:06:39.441865   65980 provision.go:177] copyRemoteCerts
	I0429 20:06:39.441931   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:06:39.441954   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.445189   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.445633   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.445677   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.445918   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.446166   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.446364   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.446521   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:39.535703   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:06:39.571033   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0429 20:06:39.604181   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:06:39.639250   65980 provision.go:87] duration metric: took 268.808275ms to configureAuth
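configureAuth regenerates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.50.184, embed-certs-161370, localhost, minikube) and copies it into /etc/docker on the guest. If the SANs ever need to be verified by hand, an openssl one-liner against the generated file (path taken from the log) would look roughly like:

    $ openssl x509 -in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem \
        -noout -text | grep -A1 'Subject Alternative Name'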
	I0429 20:06:39.639339   65980 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:06:39.639575   65980 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:39.639668   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.642544   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.642975   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.643006   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.643146   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.643348   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.643507   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.643671   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.643838   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.644011   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.644039   65980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:06:39.974134   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:06:39.974168   65980 machine.go:97] duration metric: took 963.184467ms to provisionDockerMachine
	I0429 20:06:39.974186   65980 start.go:293] postStartSetup for "embed-certs-161370" (driver="kvm2")
	I0429 20:06:39.974201   65980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:06:39.974229   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:39.974601   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:06:39.974636   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.977843   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.978295   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.978328   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.978528   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.978768   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.978939   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.979144   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.066379   65980 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:06:40.071720   65980 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:06:40.071742   65980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:06:40.071798   65980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:06:40.071875   65980 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:06:40.071965   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:06:40.082556   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:40.112774   65980 start.go:296] duration metric: took 138.571139ms for postStartSetup
	I0429 20:06:40.112827   65980 fix.go:56] duration metric: took 23.080734046s for fixHost
	I0429 20:06:40.112859   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.115931   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.116414   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.116448   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.116643   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.116859   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.117026   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.117169   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.117358   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:40.117560   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:40.117576   65980 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:06:40.223697   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421200.206855033
	
	I0429 20:06:40.223722   65980 fix.go:216] guest clock: 1714421200.206855033
	I0429 20:06:40.223732   65980 fix.go:229] Guest: 2024-04-29 20:06:40.206855033 +0000 UTC Remote: 2024-04-29 20:06:40.112832003 +0000 UTC m=+362.399028562 (delta=94.02303ms)
	I0429 20:06:40.223777   65980 fix.go:200] guest clock delta is within tolerance: 94.02303ms
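fix.go compares the guest clock (read over SSH with date) against the host clock and accepts the ~94ms delta as within tolerance. A hand-rolled version of the same comparison, assuming the SSH key path shown earlier in this log, might be:

    $ host=$(date +%s.%N); \
      guest=$(ssh -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa \
        docker@192.168.50.184 'date +%s.%N'); \
      echo "host=$host guest=$guest"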
	I0429 20:06:40.223782   65980 start.go:83] releasing machines lock for "embed-certs-161370", held for 23.191744513s
	I0429 20:06:40.223804   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.224106   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:40.226904   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.227299   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.227328   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.227462   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.227955   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.228117   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.228199   65980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:06:40.228238   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.228353   65980 ssh_runner.go:195] Run: cat /version.json
	I0429 20:06:40.228378   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.230943   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231151   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231370   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.231401   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231585   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.231595   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.231629   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231794   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.231806   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.231982   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.232000   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.232182   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.232197   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.232303   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.337533   65980 ssh_runner.go:195] Run: systemctl --version
	I0429 20:06:40.347252   65980 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:06:40.494668   65980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:06:40.502707   65980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:06:40.502788   65980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:06:40.522261   65980 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:06:40.522298   65980 start.go:494] detecting cgroup driver to use...
	I0429 20:06:40.522368   65980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:06:40.540576   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:06:40.557130   65980 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:06:40.557203   65980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:06:40.573803   65980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:06:40.589730   65980 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:06:40.731625   65980 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:06:40.902594   65980 docker.go:233] disabling docker service ...
	I0429 20:06:40.902665   65980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:06:40.921454   65980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:06:40.938734   65980 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:06:41.081822   65980 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:06:41.237778   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:06:41.254086   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:06:41.276277   65980 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:06:41.276362   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.288903   65980 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:06:41.288972   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.301347   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.313639   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.325885   65980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:06:41.338215   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.350839   65980 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.372124   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.385505   65980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:06:41.397626   65980 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:06:41.397704   65980 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:06:41.413915   65980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:06:41.427068   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:41.575690   65980 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:06:41.748047   65980 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:06:41.748132   65980 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:06:41.753313   65980 start.go:562] Will wait 60s for crictl version
	I0429 20:06:41.753379   65980 ssh_runner.go:195] Run: which crictl
	I0429 20:06:41.757672   65980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:06:41.794045   65980 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:06:41.794150   65980 ssh_runner.go:195] Run: crio --version
	I0429 20:06:41.831177   65980 ssh_runner.go:195] Run: crio --version
	I0429 20:06:41.865125   65980 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:06:38.049006   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:40.050003   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:42.050213   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:41.866698   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:41.869477   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:41.869815   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:41.869848   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:41.870107   65980 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0429 20:06:41.874917   65980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:41.889196   65980 kubeadm.go:877] updating cluster {Name:embed-certs-161370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:06:41.889353   65980 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:06:41.889423   65980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:41.936285   65980 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:06:41.936352   65980 ssh_runner.go:195] Run: which lz4
	I0429 20:06:41.941893   65980 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 20:06:41.947071   65980 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:06:41.947112   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 20:06:40.399024   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:40.899056   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.399275   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.899285   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.399200   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.899243   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.398590   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.899346   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.143962   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:41.645981   66875 node_ready.go:49] node "default-k8s-diff-port-866143" has status "Ready":"True"
	I0429 20:06:41.646007   66875 node_ready.go:38] duration metric: took 7.007388661s for node "default-k8s-diff-port-866143" to be "Ready" ...
	I0429 20:06:41.646018   66875 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:41.652664   66875 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.657667   66875 pod_ready.go:92] pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.657685   66875 pod_ready.go:81] duration metric: took 4.993051ms for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.657694   66875 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.662632   66875 pod_ready.go:92] pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.662650   66875 pod_ready.go:81] duration metric: took 4.950519ms for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.662658   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.667488   66875 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.667509   66875 pod_ready.go:81] duration metric: took 4.844299ms for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.667520   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.672480   66875 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.672501   66875 pod_ready.go:81] duration metric: took 4.974639ms for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.672512   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:42.042828   66875 pod_ready.go:92] pod "kube-proxy-zddtx" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:42.042856   66875 pod_ready.go:81] duration metric: took 370.336555ms for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:42.042868   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.051930   66875 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:44.548970   66875 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:44.548999   66875 pod_ready.go:81] duration metric: took 2.506120519s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.549011   66875 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.051077   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:46.052233   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:43.759688   65980 crio.go:462] duration metric: took 1.817838869s to copy over tarball
	I0429 20:06:43.759784   65980 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:46.405802   65980 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.64598022s)
	I0429 20:06:46.405851   65980 crio.go:469] duration metric: took 2.646122331s to extract the tarball
	I0429 20:06:46.405861   65980 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:46.444700   65980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:46.503047   65980 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 20:06:46.503086   65980 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:06:46.503098   65980 kubeadm.go:928] updating node { 192.168.50.184 8443 v1.30.0 crio true true} ...
	I0429 20:06:46.503234   65980 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-161370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:46.503334   65980 ssh_runner.go:195] Run: crio config
	I0429 20:06:46.563489   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:06:46.563511   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:46.563523   65980 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:46.563542   65980 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-161370 NodeName:embed-certs-161370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:06:46.563662   65980 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-161370"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:46.563719   65980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:06:46.576288   65980 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:46.576350   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:46.586807   65980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0429 20:06:46.605883   65980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:46.626741   65980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0429 20:06:46.647223   65980 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:46.652262   65980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:46.667095   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:46.804937   65980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:46.831022   65980 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370 for IP: 192.168.50.184
	I0429 20:06:46.831048   65980 certs.go:194] generating shared ca certs ...
	I0429 20:06:46.831067   65980 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:46.831251   65980 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:46.831295   65980 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:46.831306   65980 certs.go:256] generating profile certs ...
	I0429 20:06:46.831385   65980 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/client.key
	I0429 20:06:46.831440   65980 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.key.9384fac7
	I0429 20:06:46.831476   65980 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.key
	I0429 20:06:46.831582   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:46.831610   65980 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:46.831617   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:46.831635   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:46.831662   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:46.831691   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:46.831729   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:46.832571   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:46.896363   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:46.939336   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:46.976256   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:47.007777   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0429 20:06:47.045019   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:47.079584   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:47.114002   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:06:47.142163   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:47.170063   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:47.199098   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:47.228985   65980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:47.250928   65980 ssh_runner.go:195] Run: openssl version
	I0429 20:06:47.258215   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:47.271653   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.277100   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.277183   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.283876   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:47.297519   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:47.311104   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.316347   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.316408   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.322992   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:47.337744   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:47.351332   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.356912   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.356964   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.363339   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:47.378501   65980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:47.383995   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:47.391157   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:47.398039   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:47.405117   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:47.412125   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:47.419257   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 20:06:47.425917   65980 kubeadm.go:391] StartCluster: {Name:embed-certs-161370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:47.426009   65980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:47.426049   65980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:47.469133   65980 cri.go:89] found id: ""
	I0429 20:06:47.469216   65980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:47.481852   65980 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:47.481878   65980 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:47.481883   65980 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:47.481926   65980 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:47.495254   65980 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:47.496760   65980 kubeconfig.go:125] found "embed-certs-161370" server: "https://192.168.50.184:8443"
	I0429 20:06:47.499898   65980 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:47.511866   65980 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.184
	I0429 20:06:47.511903   65980 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:47.511917   65980 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:47.511972   65980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:47.563879   65980 cri.go:89] found id: ""
	I0429 20:06:47.563956   65980 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:47.583490   65980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:47.595867   65980 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:47.595893   65980 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:47.595947   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:06:47.608218   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:47.608283   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:47.620329   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:06:47.631394   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:47.631527   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:47.643107   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:06:47.654164   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:47.654233   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:47.665890   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:06:47.676817   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:47.676859   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:47.688608   65980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:47.700068   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:45.398908   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:45.898619   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.398795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.899058   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.399257   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.899269   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.398874   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.898653   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.399305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.898855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.556692   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:49.056546   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:48.550949   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:50.551905   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:47.821391   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.623284   65980 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.31791052s)
	I0429 20:06:49.623343   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.870630   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.950525   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:50.061240   65980 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:50.061331   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.562165   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.062299   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.139853   65980 api_server.go:72] duration metric: took 1.078602354s to wait for apiserver process to appear ...
	I0429 20:06:51.139883   65980 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:06:51.139905   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:51.140472   65980 api_server.go:269] stopped: https://192.168.50.184:8443/healthz: Get "https://192.168.50.184:8443/healthz": dial tcp 192.168.50.184:8443: connect: connection refused
	I0429 20:06:51.640813   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:50.398577   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.899284   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.399361   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.899134   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.399211   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.898733   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.399280   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.898915   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.399264   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.898840   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.057650   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:53.559429   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:53.049570   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:55.049866   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:57.050558   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:54.540707   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:54.540765   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:54.540797   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:54.618982   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:54.619016   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:54.640333   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:54.674491   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:54.674542   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:55.140955   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:55.157479   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:55.157517   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:55.639999   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:55.646278   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:55.646311   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:56.140938   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:56.147336   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:56.147371   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:56.640927   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:56.647027   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:56.647054   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:57.140894   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:57.145193   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:57.145236   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:57.640842   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:57.645453   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:57.645478   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:58.140524   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:58.146317   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0429 20:06:58.153972   65980 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:58.154011   65980 api_server.go:131] duration metric: took 7.014120443s to wait for apiserver health ...
	I0429 20:06:58.154028   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:06:58.154036   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:58.155341   65980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
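
The lines above show the restarting node polling https://192.168.50.184:8443/healthz roughly every 500ms until the apiserver finally answers 200 after about 7s. A minimal sketch of that polling pattern, assuming a self-signed apiserver certificate and the URL/interval taken from the log (this is illustrative, not minikube's actual api_server.go code):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the given /healthz URL until it returns 200 OK or the
// timeout expires, printing the failing body (the "[+]/[-]" check list) on 500s.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip TLS verification for the sketch; a real client
		// would load the cluster CA certificate instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // ~500ms cadence, as in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.184:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
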
	I0429 20:06:55.398622   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:55.898563   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.399306   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.898473   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.899278   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.399121   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.899291   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.399197   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.898901   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.056503   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:58.056988   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:59.053737   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:01.555480   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:58.156794   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:58.176870   65980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:58.215333   65980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:58.230619   65980 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:58.230655   65980 system_pods.go:61] "coredns-7db6d8ff4d-wjfff" [bd92e456-b538-49ae-984b-c6bcea6add30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:58.230667   65980 system_pods.go:61] "etcd-embed-certs-161370" [da2d022f-33c4-49b7-b997-a6783043f3e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:58.230675   65980 system_pods.go:61] "kube-apiserver-embed-certs-161370" [032913c9-bb91-46ba-ad4d-a4d5b63d806f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:58.230681   65980 system_pods.go:61] "kube-controller-manager-embed-certs-161370" [2f3ae1ff-0688-4c70-a888-5e1e640f64bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:58.230685   65980 system_pods.go:61] "kube-proxy-9kmg8" [01bbd2ca-24d2-4c7c-b4ea-79604ac3f486] Running
	I0429 20:06:58.230689   65980 system_pods.go:61] "kube-scheduler-embed-certs-161370" [c88ab7cc-1aef-48cb-814e-eff8e946885c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:58.230694   65980 system_pods.go:61] "metrics-server-569cc877fc-c4h7f" [bf1cae8d-cca1-4573-935f-e60118ca9575] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:58.230698   65980 system_pods.go:61] "storage-provisioner" [1686a084-f28b-4b26-b748-85a2a3613dde] Running
	I0429 20:06:58.230703   65980 system_pods.go:74] duration metric: took 15.348727ms to wait for pod list to return data ...
	I0429 20:06:58.230713   65980 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:58.233411   65980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:58.233436   65980 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:58.233447   65980 node_conditions.go:105] duration metric: took 2.729694ms to run NodePressure ...
	I0429 20:06:58.233466   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:58.532729   65980 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:58.538018   65980 kubeadm.go:733] kubelet initialised
	I0429 20:06:58.538038   65980 kubeadm.go:734] duration metric: took 5.283028ms waiting for restarted kubelet to initialise ...
	I0429 20:06:58.538046   65980 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:58.544267   65980 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:00.553501   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:00.398537   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.899359   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.399125   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.899428   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.399457   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.899355   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.399421   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.899376   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.399331   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.899263   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.555517   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:02.557429   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:05.056216   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:04.049941   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:06.051285   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:03.069330   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:03.554710   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.554732   65980 pod_ready.go:81] duration metric: took 5.010440873s for pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.554742   65980 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.562277   65980 pod_ready.go:92] pod "etcd-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.562298   65980 pod_ready.go:81] duration metric: took 7.550156ms for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.562306   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.567038   65980 pod_ready.go:92] pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.567060   65980 pod_ready.go:81] duration metric: took 4.748002ms for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.567069   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.573632   65980 pod_ready.go:92] pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.573664   65980 pod_ready.go:81] duration metric: took 1.006574407s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.573675   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9kmg8" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.578356   65980 pod_ready.go:92] pod "kube-proxy-9kmg8" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.578377   65980 pod_ready.go:81] duration metric: took 4.694437ms for pod "kube-proxy-9kmg8" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.578388   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.749703   65980 pod_ready.go:92] pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.749733   65980 pod_ready.go:81] duration metric: took 171.336391ms for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.749750   65980 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:06.757041   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
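
The pod_ready lines above repeatedly check whether each system-critical pod reports the Ready condition as True. A rough client-go sketch of that wait loop, under the assumption that plain Get-and-poll is acceptable (minikube's own pod_ready helper may differ); the kubeconfig path and pod name are taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPod polls a pod until it is Ready or the timeout expires.
func waitForPod(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Assumption: kubeconfig path as used by the describe-nodes command in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForPod(cs, "kube-system", "coredns-7db6d8ff4d-wjfff", 4*time.Minute))
}
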
	I0429 20:07:05.398458   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:05.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.399205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.399308   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.898749   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:08.399182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:08.399271   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:08.448015   66615 cri.go:89] found id: ""
	I0429 20:07:08.448041   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.448049   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:08.448055   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:08.448103   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:08.491239   66615 cri.go:89] found id: ""
	I0429 20:07:08.491265   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.491274   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:08.491280   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:08.491330   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:08.541203   66615 cri.go:89] found id: ""
	I0429 20:07:08.541226   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.541234   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:08.541239   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:08.541300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:08.584370   66615 cri.go:89] found id: ""
	I0429 20:07:08.584393   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.584401   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:08.584407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:08.584469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:08.625126   66615 cri.go:89] found id: ""
	I0429 20:07:08.625158   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.625169   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:08.625182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:08.625246   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:08.666987   66615 cri.go:89] found id: ""
	I0429 20:07:08.667018   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.667032   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:08.667039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:08.667105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:08.712363   66615 cri.go:89] found id: ""
	I0429 20:07:08.712394   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.712405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:08.712413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:08.712471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:08.762122   66615 cri.go:89] found id: ""
	I0429 20:07:08.762151   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.762170   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:08.762180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:08.762196   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:08.808218   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:08.808246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:08.867278   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:08.867317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:08.884230   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:08.884266   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:09.018183   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:09.018208   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:09.018224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
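
In the sweep above, the node running the old Kubernetes version lists containers for each control-plane component with `crictl ps -a --quiet --name=<component>` and finds none, then falls back to gathering kubelet, dmesg, CRI-O, and describe-nodes output. A small sketch of that listing step, assuming crictl is on PATH and invoked through sudo as in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the container IDs crictl reports for a named
// component; an empty slice means no matching container exists yet.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers\n", c, len(ids))
	}
}
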
	I0429 20:07:07.555443   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:09.557653   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:08.551823   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.051232   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:09.257687   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.758829   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.587112   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:11.603711   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:11.603781   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:11.651087   66615 cri.go:89] found id: ""
	I0429 20:07:11.651115   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.651123   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:11.651128   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:11.651192   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:11.691888   66615 cri.go:89] found id: ""
	I0429 20:07:11.691914   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.691921   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:11.691928   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:11.691976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:11.733411   66615 cri.go:89] found id: ""
	I0429 20:07:11.733441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.733452   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:11.733460   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:11.733517   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:11.774620   66615 cri.go:89] found id: ""
	I0429 20:07:11.774648   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.774659   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:11.774666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:11.774729   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:11.821410   66615 cri.go:89] found id: ""
	I0429 20:07:11.821441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.821449   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:11.821455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:11.821502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:11.864699   66615 cri.go:89] found id: ""
	I0429 20:07:11.864730   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.864741   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:11.864749   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:11.864809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:11.904637   66615 cri.go:89] found id: ""
	I0429 20:07:11.904678   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.904687   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:11.904693   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:11.904742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:11.970914   66615 cri.go:89] found id: ""
	I0429 20:07:11.970945   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.970957   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:11.970968   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:11.970984   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:12.024185   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:12.024226   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:12.040319   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:12.040349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:12.137888   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:12.137915   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:12.137941   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:12.210256   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:12.210290   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:14.758756   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:14.775321   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:14.775386   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:14.812637   66615 cri.go:89] found id: ""
	I0429 20:07:14.812662   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.812672   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:14.812679   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:14.812735   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:14.851503   66615 cri.go:89] found id: ""
	I0429 20:07:14.851536   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.851547   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:14.851554   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:14.851613   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:14.885708   66615 cri.go:89] found id: ""
	I0429 20:07:14.885739   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.885749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:14.885756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:14.885817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:14.926133   66615 cri.go:89] found id: ""
	I0429 20:07:14.926162   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.926173   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:14.926181   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:14.926240   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:12.056093   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.056500   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:13.549924   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:15.550544   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.257394   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:16.756833   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.967553   66615 cri.go:89] found id: ""
	I0429 20:07:14.967582   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.967593   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:14.967601   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:14.967659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:15.006174   66615 cri.go:89] found id: ""
	I0429 20:07:15.006199   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.006207   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:15.006218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:15.006293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:15.046916   66615 cri.go:89] found id: ""
	I0429 20:07:15.046940   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.046947   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:15.046953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:15.047009   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:15.089229   66615 cri.go:89] found id: ""
	I0429 20:07:15.089256   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.089266   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:15.089278   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:15.089298   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:15.143518   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:15.143561   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:15.162742   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:15.162769   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:15.242850   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:15.242872   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:15.242884   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:15.315783   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:15.315825   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:17.863336   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:17.877802   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:17.877869   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:17.935714   66615 cri.go:89] found id: ""
	I0429 20:07:17.935738   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.935746   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:17.935754   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:17.935810   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:17.988496   66615 cri.go:89] found id: ""
	I0429 20:07:17.988529   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.988540   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:17.988547   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:17.988610   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:18.030695   66615 cri.go:89] found id: ""
	I0429 20:07:18.030726   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.030737   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:18.030745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:18.030822   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:18.077452   66615 cri.go:89] found id: ""
	I0429 20:07:18.077481   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.077491   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:18.077498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:18.077561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:18.120102   66615 cri.go:89] found id: ""
	I0429 20:07:18.120127   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.120136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:18.120141   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:18.120200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:18.163440   66615 cri.go:89] found id: ""
	I0429 20:07:18.163469   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.163480   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:18.163487   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:18.163549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:18.202650   66615 cri.go:89] found id: ""
	I0429 20:07:18.202680   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.202693   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:18.202699   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:18.202760   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:18.244378   66615 cri.go:89] found id: ""
	I0429 20:07:18.244408   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.244418   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:18.244429   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:18.244446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:18.289246   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:18.289279   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:18.343382   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:18.343425   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:18.359070   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:18.359103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:18.440316   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:18.440337   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:18.440351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:16.555711   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.555851   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.051297   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:20.551594   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.756941   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:20.756974   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:22.757155   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:21.019552   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:21.036407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:21.036523   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:21.083148   66615 cri.go:89] found id: ""
	I0429 20:07:21.083171   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.083179   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:21.083184   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:21.083231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:21.129382   66615 cri.go:89] found id: ""
	I0429 20:07:21.129415   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.129426   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:21.129434   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:21.129502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:21.172978   66615 cri.go:89] found id: ""
	I0429 20:07:21.173007   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.173015   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:21.173020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:21.173068   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:21.218124   66615 cri.go:89] found id: ""
	I0429 20:07:21.218159   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.218171   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:21.218178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:21.218243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:21.260603   66615 cri.go:89] found id: ""
	I0429 20:07:21.260640   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.260651   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:21.260658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:21.260723   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:21.302351   66615 cri.go:89] found id: ""
	I0429 20:07:21.302386   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.302398   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:21.302407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:21.302498   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:21.347003   66615 cri.go:89] found id: ""
	I0429 20:07:21.347028   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.347037   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:21.347043   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:21.347098   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:21.388202   66615 cri.go:89] found id: ""
	I0429 20:07:21.388236   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.388245   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:21.388257   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:21.388272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:21.442706   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:21.442744   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:21.457453   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:21.457489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:21.539669   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:21.539695   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:21.539707   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:21.625210   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:21.625247   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.173256   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:24.189920   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:24.189990   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:24.236730   66615 cri.go:89] found id: ""
	I0429 20:07:24.236761   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.236772   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:24.236779   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:24.236843   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:24.279031   66615 cri.go:89] found id: ""
	I0429 20:07:24.279055   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.279062   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:24.279067   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:24.279112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:24.321622   66615 cri.go:89] found id: ""
	I0429 20:07:24.321647   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.321657   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:24.321665   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:24.321726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:24.360884   66615 cri.go:89] found id: ""
	I0429 20:07:24.360911   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.360919   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:24.360924   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:24.360983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:24.414439   66615 cri.go:89] found id: ""
	I0429 20:07:24.414463   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.414472   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:24.414477   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:24.414559   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:24.456994   66615 cri.go:89] found id: ""
	I0429 20:07:24.457023   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.457033   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:24.457041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:24.457107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:24.497991   66615 cri.go:89] found id: ""
	I0429 20:07:24.498026   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.498036   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:24.498044   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:24.498137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:24.539375   66615 cri.go:89] found id: ""
	I0429 20:07:24.539415   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.539426   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:24.539438   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:24.539453   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:24.661778   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:24.661804   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:24.661820   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:24.748180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:24.748215   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.795963   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:24.795999   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:24.851485   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:24.851524   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:20.556543   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:22.556775   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:24.559759   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:23.052715   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:25.550857   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.551209   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:25.256195   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.258199   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.367869   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:27.385633   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:27.385716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:27.423181   66615 cri.go:89] found id: ""
	I0429 20:07:27.423210   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.423222   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:27.423233   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:27.423293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:27.467385   66615 cri.go:89] found id: ""
	I0429 20:07:27.467419   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.467432   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:27.467439   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:27.467503   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:27.506171   66615 cri.go:89] found id: ""
	I0429 20:07:27.506204   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.506216   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:27.506223   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:27.506272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:27.545043   66615 cri.go:89] found id: ""
	I0429 20:07:27.545066   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.545074   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:27.545080   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:27.545136   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:27.592279   66615 cri.go:89] found id: ""
	I0429 20:07:27.592306   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.592314   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:27.592320   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:27.592379   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:27.628569   66615 cri.go:89] found id: ""
	I0429 20:07:27.628595   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.628604   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:27.628612   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:27.628659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:27.667937   66615 cri.go:89] found id: ""
	I0429 20:07:27.667967   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.667978   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:27.667985   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:27.668047   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:27.708813   66615 cri.go:89] found id: ""
	I0429 20:07:27.708844   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.708853   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:27.708861   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:27.708876   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:27.789589   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:27.789625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:27.837147   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:27.837180   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:27.891928   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:27.891956   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:27.906162   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:27.906188   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:27.983738   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:27.057372   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:29.555872   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:30.049373   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:32.052279   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:29.758675   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:32.257486   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:30.484404   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:30.503968   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:30.504041   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:30.553070   66615 cri.go:89] found id: ""
	I0429 20:07:30.553099   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.553111   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:30.553118   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:30.553180   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:30.609226   66615 cri.go:89] found id: ""
	I0429 20:07:30.609253   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.609262   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:30.609267   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:30.609324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:30.658359   66615 cri.go:89] found id: ""
	I0429 20:07:30.658384   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.658395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:30.658401   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:30.658459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:30.710024   66615 cri.go:89] found id: ""
	I0429 20:07:30.710048   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.710058   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:30.710114   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:30.710173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:30.752361   66615 cri.go:89] found id: ""
	I0429 20:07:30.752388   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.752398   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:30.752405   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:30.752469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:30.793311   66615 cri.go:89] found id: ""
	I0429 20:07:30.793333   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.793341   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:30.793347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:30.793394   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:30.832371   66615 cri.go:89] found id: ""
	I0429 20:07:30.832400   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.832411   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:30.832417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:30.832469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:30.871183   66615 cri.go:89] found id: ""
	I0429 20:07:30.871215   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.871226   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:30.871237   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:30.871253   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:30.929909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:30.929947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:30.944454   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:30.944482   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:31.022060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:31.022100   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:31.022116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:31.104142   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:31.104185   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:33.651167   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:33.667888   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:33.667948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:33.708455   66615 cri.go:89] found id: ""
	I0429 20:07:33.708484   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.708495   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:33.708502   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:33.708561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:33.747578   66615 cri.go:89] found id: ""
	I0429 20:07:33.747602   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.747611   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:33.747616   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:33.747661   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:33.796005   66615 cri.go:89] found id: ""
	I0429 20:07:33.796036   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.796056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:33.796064   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:33.796128   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:33.836238   66615 cri.go:89] found id: ""
	I0429 20:07:33.836263   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.836271   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:33.836276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:33.836324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:33.877010   66615 cri.go:89] found id: ""
	I0429 20:07:33.877043   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.877056   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:33.877065   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:33.877137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:33.919690   66615 cri.go:89] found id: ""
	I0429 20:07:33.919714   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.919722   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:33.919727   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:33.919797   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:33.959857   66615 cri.go:89] found id: ""
	I0429 20:07:33.959889   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.959900   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:33.959907   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:33.959989   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:33.996349   66615 cri.go:89] found id: ""
	I0429 20:07:33.996376   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.996386   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:33.996396   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:33.996433   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:34.010773   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:34.010808   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:34.091581   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:34.091599   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:34.091611   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:34.173266   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:34.173299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:34.221447   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:34.221479   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:32.055352   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.056364   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.550100   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.550663   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.756264   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.756583   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.776486   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:36.791630   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:36.791764   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:36.837475   66615 cri.go:89] found id: ""
	I0429 20:07:36.837503   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.837513   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:36.837521   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:36.837607   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:36.879902   66615 cri.go:89] found id: ""
	I0429 20:07:36.879936   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.879947   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:36.879954   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:36.880021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:36.918566   66615 cri.go:89] found id: ""
	I0429 20:07:36.918594   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.918608   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:36.918613   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:36.918659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:36.958876   66615 cri.go:89] found id: ""
	I0429 20:07:36.958937   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.958948   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:36.958959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:36.959008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:36.998790   66615 cri.go:89] found id: ""
	I0429 20:07:36.998820   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.998845   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:36.998864   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:36.998932   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:37.036933   66615 cri.go:89] found id: ""
	I0429 20:07:37.036962   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.036972   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:37.036979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:37.037024   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:37.076560   66615 cri.go:89] found id: ""
	I0429 20:07:37.076597   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.076609   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:37.076616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:37.076688   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:37.118324   66615 cri.go:89] found id: ""
	I0429 20:07:37.118351   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.118360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:37.118368   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:37.118380   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:37.194671   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:37.194714   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:37.236269   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:37.236300   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:37.297006   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:37.297061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:37.312696   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:37.312723   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:37.387132   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:39.888111   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:39.903157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:39.903236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:39.945913   66615 cri.go:89] found id: ""
	I0429 20:07:39.945945   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.945956   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:39.945980   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:39.946076   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:36.056553   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:38.057230   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:39.050274   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:41.053502   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:38.756717   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:40.762297   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:39.986494   66615 cri.go:89] found id: ""
	I0429 20:07:39.986521   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.986530   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:39.986538   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:39.986598   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:40.031481   66615 cri.go:89] found id: ""
	I0429 20:07:40.031520   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.031531   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:40.031539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:40.031604   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:40.076792   66615 cri.go:89] found id: ""
	I0429 20:07:40.076816   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.076824   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:40.076830   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:40.076877   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:40.121020   66615 cri.go:89] found id: ""
	I0429 20:07:40.121050   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.121061   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:40.121068   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:40.121134   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:40.173189   66615 cri.go:89] found id: ""
	I0429 20:07:40.173221   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.173233   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:40.173241   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:40.173303   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:40.220190   66615 cri.go:89] found id: ""
	I0429 20:07:40.220212   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.220223   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:40.220229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:40.220293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:40.262552   66615 cri.go:89] found id: ""
	I0429 20:07:40.262579   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.262588   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:40.262600   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:40.262616   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:40.322249   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:40.322289   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:40.338703   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:40.338734   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:40.431311   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:40.431333   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:40.431345   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:40.518410   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:40.518446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:43.062556   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:43.077757   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:43.077844   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:43.129247   66615 cri.go:89] found id: ""
	I0429 20:07:43.129277   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.129289   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:43.129296   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:43.129364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:43.173474   66615 cri.go:89] found id: ""
	I0429 20:07:43.173501   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.173509   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:43.173514   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:43.173566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:43.218788   66615 cri.go:89] found id: ""
	I0429 20:07:43.218812   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.218820   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:43.218825   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:43.218873   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:43.259269   66615 cri.go:89] found id: ""
	I0429 20:07:43.259289   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.259297   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:43.259302   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:43.259362   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:43.301152   66615 cri.go:89] found id: ""
	I0429 20:07:43.301180   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.301189   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:43.301195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:43.301244   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:43.338183   66615 cri.go:89] found id: ""
	I0429 20:07:43.338211   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.338222   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:43.338229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:43.338276   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:43.376919   66615 cri.go:89] found id: ""
	I0429 20:07:43.376946   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.376958   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:43.376966   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:43.377032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:43.417421   66615 cri.go:89] found id: ""
	I0429 20:07:43.417450   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.417457   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:43.417465   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:43.417478   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:43.470009   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:43.470040   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:43.486059   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:43.486109   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:43.561688   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:43.561709   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:43.561725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:43.649713   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:43.649750   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:40.555780   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.056758   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.552176   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:46.049393   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.256870   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:45.258520   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:47.757738   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:46.194996   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:46.210261   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:46.210342   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:46.249208   66615 cri.go:89] found id: ""
	I0429 20:07:46.249240   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.249253   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:46.249260   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:46.249336   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:46.287285   66615 cri.go:89] found id: ""
	I0429 20:07:46.287315   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.287328   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:46.287335   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:46.287397   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:46.327944   66615 cri.go:89] found id: ""
	I0429 20:07:46.327976   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.327988   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:46.327996   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:46.328061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:46.373875   66615 cri.go:89] found id: ""
	I0429 20:07:46.373899   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.373908   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:46.373914   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:46.373967   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:46.413748   66615 cri.go:89] found id: ""
	I0429 20:07:46.413774   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.413783   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:46.413789   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:46.413853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:46.459380   66615 cri.go:89] found id: ""
	I0429 20:07:46.459412   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.459424   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:46.459432   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:46.459496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:46.499833   66615 cri.go:89] found id: ""
	I0429 20:07:46.499861   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.499870   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:46.499876   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:46.499939   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:46.541025   66615 cri.go:89] found id: ""
	I0429 20:07:46.541055   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.541068   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:46.541080   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:46.541096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:46.601187   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:46.601224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:46.617399   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:46.617426   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:46.697076   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:46.697113   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:46.697129   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:46.783265   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:46.783303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.335795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:49.350030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:49.350116   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:49.390278   66615 cri.go:89] found id: ""
	I0429 20:07:49.390315   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.390326   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:49.390333   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:49.390388   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:49.431145   66615 cri.go:89] found id: ""
	I0429 20:07:49.431175   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.431186   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:49.431193   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:49.431252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:49.473965   66615 cri.go:89] found id: ""
	I0429 20:07:49.473997   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.474014   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:49.474022   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:49.474105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:49.515372   66615 cri.go:89] found id: ""
	I0429 20:07:49.515407   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.515419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:49.515427   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:49.515487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:49.552541   66615 cri.go:89] found id: ""
	I0429 20:07:49.552567   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.552576   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:49.552582   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:49.552650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:49.599628   66615 cri.go:89] found id: ""
	I0429 20:07:49.599660   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.599672   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:49.599680   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:49.599745   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:49.642705   66615 cri.go:89] found id: ""
	I0429 20:07:49.642741   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.642752   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:49.642759   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:49.642827   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:49.679864   66615 cri.go:89] found id: ""
	I0429 20:07:49.679888   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.679896   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:49.679905   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:49.679919   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:49.765967   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:49.765986   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:49.766010   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:49.852739   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:49.852779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.905586   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:49.905613   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:45.559781   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:48.059952   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:48.049788   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:50.548836   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.551059   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:50.256898   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.757213   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:49.959443   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:49.959474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:52.476677   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:52.491378   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:52.491458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:52.535801   66615 cri.go:89] found id: ""
	I0429 20:07:52.535827   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.535835   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:52.535841   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:52.535901   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:52.582895   66615 cri.go:89] found id: ""
	I0429 20:07:52.582932   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.582944   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:52.582952   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:52.583022   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:52.627070   66615 cri.go:89] found id: ""
	I0429 20:07:52.627096   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.627113   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:52.627120   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:52.627181   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:52.673312   66615 cri.go:89] found id: ""
	I0429 20:07:52.673339   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.673348   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:52.673353   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:52.673399   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:52.713099   66615 cri.go:89] found id: ""
	I0429 20:07:52.713124   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.713131   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:52.713139   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:52.713205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:52.761982   66615 cri.go:89] found id: ""
	I0429 20:07:52.762007   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.762017   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:52.762024   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:52.762108   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:52.801019   66615 cri.go:89] found id: ""
	I0429 20:07:52.801048   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.801059   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:52.801067   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:52.801141   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:52.842544   66615 cri.go:89] found id: ""
	I0429 20:07:52.842578   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.842602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:52.842613   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:52.842630   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:52.896409   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:52.896442   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:52.912625   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:52.912650   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:52.992231   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:52.992260   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:52.992276   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:53.077473   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:53.077507   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:50.555818   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.556860   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:54.557161   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:54.554094   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:57.049699   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:55.257406   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:57.257840   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:55.625557   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:55.640211   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:55.640284   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:55.683215   66615 cri.go:89] found id: ""
	I0429 20:07:55.683250   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.683259   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:55.683275   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:55.683341   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:55.730820   66615 cri.go:89] found id: ""
	I0429 20:07:55.730851   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.730862   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:55.730869   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:55.730928   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:55.771784   66615 cri.go:89] found id: ""
	I0429 20:07:55.771808   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.771816   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:55.771821   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:55.771866   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:55.814988   66615 cri.go:89] found id: ""
	I0429 20:07:55.815021   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.815034   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:55.815042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:55.815114   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:55.859293   66615 cri.go:89] found id: ""
	I0429 20:07:55.859327   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.859340   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:55.859349   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:55.859416   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:55.901802   66615 cri.go:89] found id: ""
	I0429 20:07:55.901833   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.901844   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:55.901852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:55.901921   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:55.943863   66615 cri.go:89] found id: ""
	I0429 20:07:55.943895   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.943905   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:55.943913   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:55.943977   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:55.986256   66615 cri.go:89] found id: ""
	I0429 20:07:55.986284   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.986296   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:55.986314   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:55.986332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:56.036710   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:56.036742   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:56.099909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:56.099945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:56.117630   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:56.117660   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:56.197396   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:56.197421   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:56.197436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:58.779065   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:58.794086   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:58.794168   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:58.844035   66615 cri.go:89] found id: ""
	I0429 20:07:58.844062   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.844070   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:58.844076   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:58.844133   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:58.887859   66615 cri.go:89] found id: ""
	I0429 20:07:58.887889   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.887900   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:58.887906   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:58.887991   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:58.929039   66615 cri.go:89] found id: ""
	I0429 20:07:58.929072   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.929083   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:58.929092   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:58.929152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:58.965930   66615 cri.go:89] found id: ""
	I0429 20:07:58.965975   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.965983   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:58.965989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:58.966061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:59.005583   66615 cri.go:89] found id: ""
	I0429 20:07:59.005616   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.005628   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:59.005638   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:59.005697   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:59.047964   66615 cri.go:89] found id: ""
	I0429 20:07:59.047994   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.048007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:59.048014   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:59.048077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:59.091851   66615 cri.go:89] found id: ""
	I0429 20:07:59.091891   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.091904   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:59.091909   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:59.091978   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:59.134843   66615 cri.go:89] found id: ""
	I0429 20:07:59.134874   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.134881   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:59.134890   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:59.134907   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:59.219048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:59.219084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:59.267404   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:59.267436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:59.322264   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:59.322303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:59.339196   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:59.339235   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:59.441904   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
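	The block above repeats for the remainder of this run (PID 66615): crictl finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy or kube-controller-manager containers, so the fallback "describe nodes" call is refused on localhost:8443 and only kubelet, CRI-O, dmesg and container-status logs can be gathered. A minimal sketch of the same probe sequence, assuming shell access to the minikube node; the paths, unit names and flags are copied from the log lines above, nothing here is new behaviour:

	    # check whether any control-plane container exists in CRI-O
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # the fallback sources the gatherer can still read while the apiserver is down
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    # this is the call that keeps failing with "connection refused" on localhost:8443
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig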
	I0429 20:07:56.558660   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.057214   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.054473   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.550825   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.756683   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.759031   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.942998   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:01.957442   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:01.957502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:02.002240   66615 cri.go:89] found id: ""
	I0429 20:08:02.002271   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.002283   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:02.002291   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:02.002353   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:02.051506   66615 cri.go:89] found id: ""
	I0429 20:08:02.051535   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.051546   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:02.051552   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:02.051611   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:02.093194   66615 cri.go:89] found id: ""
	I0429 20:08:02.093234   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.093247   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:02.093254   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:02.093317   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:02.134988   66615 cri.go:89] found id: ""
	I0429 20:08:02.135016   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.135027   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:02.135034   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:02.135099   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:02.182954   66615 cri.go:89] found id: ""
	I0429 20:08:02.182982   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.182993   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:02.183000   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:02.183063   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:02.227778   66615 cri.go:89] found id: ""
	I0429 20:08:02.227807   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.227817   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:02.227826   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:02.227888   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:02.265593   66615 cri.go:89] found id: ""
	I0429 20:08:02.265624   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.265634   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:02.265641   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:02.265701   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:02.306520   66615 cri.go:89] found id: ""
	I0429 20:08:02.306550   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.306558   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:02.306566   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:02.306578   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:02.323806   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:02.323844   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:02.407110   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:02.407140   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:02.407153   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:02.493755   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:02.493791   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:02.538610   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:02.538640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:01.556084   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:03.556487   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:03.551788   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:05.553047   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:04.257831   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:06.756438   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
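	Interleaved with the log gathering above, three other runs (PIDs 65980, 66218 and 66875) keep polling their metrics-server pods, and every probe reports the Ready condition as False. A minimal sketch of checking that condition by hand, assuming kubectl access to the affected cluster; the pod name is copied from the log lines, while the jsonpath query is illustrative and not part of the test harness:

	    # illustrative only: read the Ready condition the pod_ready poller is waiting on
	    kubectl -n kube-system get pod metrics-server-569cc877fc-c4h7f \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'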
	I0429 20:08:05.096630   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:05.111112   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:05.111173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:05.151237   66615 cri.go:89] found id: ""
	I0429 20:08:05.151268   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.151279   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:05.151286   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:05.151370   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:05.205344   66615 cri.go:89] found id: ""
	I0429 20:08:05.205379   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.205389   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:05.205396   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:05.205478   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:05.244394   66615 cri.go:89] found id: ""
	I0429 20:08:05.244426   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.244438   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:05.244445   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:05.244504   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:05.285320   66615 cri.go:89] found id: ""
	I0429 20:08:05.285343   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.285350   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:05.285356   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:05.285404   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:05.327618   66615 cri.go:89] found id: ""
	I0429 20:08:05.327645   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.327657   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:05.327664   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:05.327742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:05.369152   66615 cri.go:89] found id: ""
	I0429 20:08:05.369178   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.369194   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:05.369208   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:05.369277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:05.407206   66615 cri.go:89] found id: ""
	I0429 20:08:05.407234   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.407243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:05.407248   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:05.407299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:05.447404   66615 cri.go:89] found id: ""
	I0429 20:08:05.447438   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.447449   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:05.447459   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:05.447475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:05.529660   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:05.529700   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:05.582510   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:05.582565   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:05.639300   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:05.639351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:05.656825   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:05.656860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:05.730863   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.231635   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:08.247722   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:08.247811   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:08.298354   66615 cri.go:89] found id: ""
	I0429 20:08:08.298382   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.298395   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:08.298401   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:08.298459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:08.339497   66615 cri.go:89] found id: ""
	I0429 20:08:08.339536   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.339549   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:08.339556   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:08.339609   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:08.379665   66615 cri.go:89] found id: ""
	I0429 20:08:08.379695   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.379705   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:08.379712   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:08.379786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:08.419698   66615 cri.go:89] found id: ""
	I0429 20:08:08.419722   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.419732   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:08.419739   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:08.419798   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:08.463901   66615 cri.go:89] found id: ""
	I0429 20:08:08.463935   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.463946   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:08.463953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:08.464028   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:08.504568   66615 cri.go:89] found id: ""
	I0429 20:08:08.504603   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.504617   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:08.504626   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:08.504695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:08.545634   66615 cri.go:89] found id: ""
	I0429 20:08:08.545661   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.545671   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:08.545678   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:08.545741   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:08.586936   66615 cri.go:89] found id: ""
	I0429 20:08:08.586965   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.586976   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:08.586987   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:08.587003   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:08.641755   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:08.641794   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:08.659798   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:08.659845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:08.744265   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.744288   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:08.744303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:08.823813   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:08.823860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:05.557172   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:07.558538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:10.057841   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:08.049902   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:10.050576   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:12.051331   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:08.757300   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:11.257697   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:11.375600   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:11.396286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:11.396351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:11.442737   66615 cri.go:89] found id: ""
	I0429 20:08:11.442781   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.442789   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:11.442797   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:11.442865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:11.484131   66615 cri.go:89] found id: ""
	I0429 20:08:11.484158   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.484167   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:11.484172   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:11.484231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:11.526647   66615 cri.go:89] found id: ""
	I0429 20:08:11.526684   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.526695   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:11.526705   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:11.526777   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:11.572001   66615 cri.go:89] found id: ""
	I0429 20:08:11.572028   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.572036   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:11.572042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:11.572100   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:11.618980   66615 cri.go:89] found id: ""
	I0429 20:08:11.619003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.619011   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:11.619016   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:11.619077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:11.667079   66615 cri.go:89] found id: ""
	I0429 20:08:11.667107   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.667115   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:11.667123   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:11.667198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:11.707967   66615 cri.go:89] found id: ""
	I0429 20:08:11.708003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.708013   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:11.708020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:11.708073   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:11.753024   66615 cri.go:89] found id: ""
	I0429 20:08:11.753053   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.753062   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:11.753070   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:11.753081   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:11.820171   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:11.820210   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:11.852234   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:11.852263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:11.971060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:11.971085   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:11.971097   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:12.049797   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:12.049845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:14.601181   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:14.621413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:14.621496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:14.677453   66615 cri.go:89] found id: ""
	I0429 20:08:14.677486   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.677498   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:14.677504   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:14.677562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:14.720517   66615 cri.go:89] found id: ""
	I0429 20:08:14.720548   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.720560   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:14.720571   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:14.720636   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:14.770186   66615 cri.go:89] found id: ""
	I0429 20:08:14.770211   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.770219   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:14.770225   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:14.770301   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:14.815286   66615 cri.go:89] found id: ""
	I0429 20:08:14.815310   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.815320   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:14.815327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:14.815389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:14.862625   66615 cri.go:89] found id: ""
	I0429 20:08:14.862651   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.862662   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:14.862669   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:14.862726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:14.910517   66615 cri.go:89] found id: ""
	I0429 20:08:14.910554   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.910565   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:14.910572   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:14.910634   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:14.951085   66615 cri.go:89] found id: ""
	I0429 20:08:14.951110   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.951119   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:14.951124   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:14.951173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:12.558191   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:15.056987   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:14.051423   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:16.051632   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:13.757001   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:16.257425   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:14.991414   66615 cri.go:89] found id: ""
	I0429 20:08:14.991443   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.991455   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:14.991464   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:14.991476   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:15.047551   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:15.047583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:15.063667   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:15.063692   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:15.141744   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:15.141820   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:15.141841   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:15.225676   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:15.225722   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:17.774459   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:17.793137   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:17.793210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:17.856725   66615 cri.go:89] found id: ""
	I0429 20:08:17.856756   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.856767   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:17.856774   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:17.856835   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:17.916510   66615 cri.go:89] found id: ""
	I0429 20:08:17.916542   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.916554   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:17.916561   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:17.916646   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:17.970835   66615 cri.go:89] found id: ""
	I0429 20:08:17.970867   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.970877   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:17.970884   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:17.970948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:18.013324   66615 cri.go:89] found id: ""
	I0429 20:08:18.013353   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.013366   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:18.013384   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:18.013458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:18.062930   66615 cri.go:89] found id: ""
	I0429 20:08:18.062957   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.062968   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:18.062974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:18.063040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:18.111792   66615 cri.go:89] found id: ""
	I0429 20:08:18.111820   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.111829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:18.111834   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:18.111911   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:18.160096   66615 cri.go:89] found id: ""
	I0429 20:08:18.160121   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.160129   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:18.160135   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:18.160198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:18.204012   66615 cri.go:89] found id: ""
	I0429 20:08:18.204044   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.204052   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:18.204062   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:18.204074   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:18.284288   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:18.284337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:18.340746   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:18.340779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:18.397612   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:18.397652   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:18.413425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:18.413455   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:18.493598   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:17.058215   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:19.556308   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:18.551175   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:20.551292   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:22.551637   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:18.757370   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:21.259192   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:20.994339   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:21.010199   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:21.010289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:21.052190   66615 cri.go:89] found id: ""
	I0429 20:08:21.052219   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.052230   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:21.052237   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:21.052300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:21.090838   66615 cri.go:89] found id: ""
	I0429 20:08:21.090870   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.090882   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:21.090889   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:21.090953   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:21.137997   66615 cri.go:89] found id: ""
	I0429 20:08:21.138044   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.138056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:21.138082   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:21.138171   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:21.176278   66615 cri.go:89] found id: ""
	I0429 20:08:21.176311   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.176323   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:21.176331   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:21.176390   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:21.213925   66615 cri.go:89] found id: ""
	I0429 20:08:21.213955   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.213966   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:21.213973   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:21.214039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:21.253815   66615 cri.go:89] found id: ""
	I0429 20:08:21.253842   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.253850   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:21.253857   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:21.253905   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:21.296521   66615 cri.go:89] found id: ""
	I0429 20:08:21.296553   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.296565   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:21.296573   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:21.296633   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:21.337114   66615 cri.go:89] found id: ""
	I0429 20:08:21.337143   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.337150   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:21.337158   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:21.337177   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:21.384860   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:21.384901   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:21.443837   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:21.443899   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:21.460084   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:21.460116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:21.541230   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:21.541262   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:21.541278   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.132057   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:24.148381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:24.148458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:24.192469   66615 cri.go:89] found id: ""
	I0429 20:08:24.192499   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.192510   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:24.192516   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:24.192568   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:24.232150   66615 cri.go:89] found id: ""
	I0429 20:08:24.232177   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.232188   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:24.232195   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:24.232260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:24.272679   66615 cri.go:89] found id: ""
	I0429 20:08:24.272705   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.272714   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:24.272719   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:24.272772   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:24.317114   66615 cri.go:89] found id: ""
	I0429 20:08:24.317137   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.317145   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:24.317151   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:24.317200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:24.362251   66615 cri.go:89] found id: ""
	I0429 20:08:24.362279   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.362287   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:24.362294   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:24.362346   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:24.405696   66615 cri.go:89] found id: ""
	I0429 20:08:24.405721   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.405729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:24.405734   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:24.405828   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:24.446837   66615 cri.go:89] found id: ""
	I0429 20:08:24.446864   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.446871   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:24.446878   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:24.446929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:24.493416   66615 cri.go:89] found id: ""
	I0429 20:08:24.493445   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.493454   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:24.493462   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:24.493475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:24.555657   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:24.555693   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:24.572297   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:24.572328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:24.658463   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:24.658487   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:24.658499   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.752064   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:24.752103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:21.557948   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:24.056339   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:25.050530   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:27.554744   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:23.758156   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:26.261403   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:27.303812   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:27.319304   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:27.319373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:27.360473   66615 cri.go:89] found id: ""
	I0429 20:08:27.360509   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.360521   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:27.360529   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:27.360595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:27.404619   66615 cri.go:89] found id: ""
	I0429 20:08:27.404651   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.404668   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:27.404675   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:27.404742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:27.447464   66615 cri.go:89] found id: ""
	I0429 20:08:27.447490   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.447498   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:27.447503   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:27.447556   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:27.489197   66615 cri.go:89] found id: ""
	I0429 20:08:27.489235   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.489246   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:27.489253   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:27.489323   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:27.534354   66615 cri.go:89] found id: ""
	I0429 20:08:27.534387   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.534397   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:27.534404   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:27.534470   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:27.580721   66615 cri.go:89] found id: ""
	I0429 20:08:27.580751   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.580762   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:27.580769   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:27.580841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:27.620000   66615 cri.go:89] found id: ""
	I0429 20:08:27.620033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.620041   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:27.620046   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:27.620096   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:27.659000   66615 cri.go:89] found id: ""
	I0429 20:08:27.659033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.659041   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:27.659050   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:27.659062   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:27.739202   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:27.739241   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:27.784761   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:27.784807   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:27.842707   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:27.842748   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:27.859471   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:27.859498   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:27.942686   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:26.058098   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:28.059648   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.056692   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:32.550893   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:28.757412   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.759070   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.443410   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:30.460332   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:30.460417   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:30.497715   66615 cri.go:89] found id: ""
	I0429 20:08:30.497752   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.497764   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:30.497772   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:30.497841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:30.539376   66615 cri.go:89] found id: ""
	I0429 20:08:30.539409   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.539419   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:30.539426   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:30.539492   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:30.587567   66615 cri.go:89] found id: ""
	I0429 20:08:30.587596   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.587606   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:30.587616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:30.587679   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:30.626198   66615 cri.go:89] found id: ""
	I0429 20:08:30.626228   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.626238   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:30.626246   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:30.626313   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:30.665798   66615 cri.go:89] found id: ""
	I0429 20:08:30.665829   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.665837   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:30.665843   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:30.665909   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:30.708627   66615 cri.go:89] found id: ""
	I0429 20:08:30.708659   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.708671   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:30.708679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:30.708762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:30.754190   66615 cri.go:89] found id: ""
	I0429 20:08:30.754220   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.754230   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:30.754236   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:30.754295   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:30.797383   66615 cri.go:89] found id: ""
	I0429 20:08:30.797410   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.797421   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:30.797432   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:30.797447   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:30.843485   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:30.843512   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:30.900081   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:30.900118   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:30.916095   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:30.916125   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:30.995509   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:30.995529   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:30.995541   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:33.584596   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:33.600969   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:33.601058   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:33.643935   66615 cri.go:89] found id: ""
	I0429 20:08:33.643967   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.643979   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:33.643986   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:33.644049   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:33.681047   66615 cri.go:89] found id: ""
	I0429 20:08:33.681077   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.681085   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:33.681091   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:33.681160   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:33.726450   66615 cri.go:89] found id: ""
	I0429 20:08:33.726479   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.726490   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:33.726501   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:33.726561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:33.765237   66615 cri.go:89] found id: ""
	I0429 20:08:33.765264   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.765275   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:33.765281   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:33.765339   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:33.808333   66615 cri.go:89] found id: ""
	I0429 20:08:33.808366   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.808376   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:33.808383   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:33.808446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:33.854991   66615 cri.go:89] found id: ""
	I0429 20:08:33.855023   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.855034   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:33.855041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:33.855126   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:33.895405   66615 cri.go:89] found id: ""
	I0429 20:08:33.895434   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.895446   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:33.895455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:33.895521   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:33.937265   66615 cri.go:89] found id: ""
	I0429 20:08:33.937289   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.937297   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:33.937306   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:33.937324   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:33.991565   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:33.991594   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:34.006316   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:34.006343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:34.088734   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:34.088762   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:34.088776   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:34.180451   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:34.180489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:30.557020   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:33.058354   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:35.049638   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:37.051464   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:33.256955   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:35.257122   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:37.257629   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:36.727080   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:36.743038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:36.743124   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:36.785441   66615 cri.go:89] found id: ""
	I0429 20:08:36.785465   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.785475   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:36.785482   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:36.785542   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:36.828787   66615 cri.go:89] found id: ""
	I0429 20:08:36.828819   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.828829   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:36.828836   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:36.828896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:36.867712   66615 cri.go:89] found id: ""
	I0429 20:08:36.867738   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.867749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:36.867756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:36.867825   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:36.911435   66615 cri.go:89] found id: ""
	I0429 20:08:36.911462   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.911472   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:36.911478   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:36.911560   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:36.953803   66615 cri.go:89] found id: ""
	I0429 20:08:36.953828   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.953836   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:36.953842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:36.953903   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:36.990305   66615 cri.go:89] found id: ""
	I0429 20:08:36.990329   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.990339   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:36.990347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:36.990434   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:37.029177   66615 cri.go:89] found id: ""
	I0429 20:08:37.029206   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.029225   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:37.029232   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:37.029294   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:37.067583   66615 cri.go:89] found id: ""
	I0429 20:08:37.067605   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.067612   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:37.067619   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:37.067631   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:37.144739   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:37.144776   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:37.144788   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:37.227724   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:37.227762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:37.270383   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:37.270417   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:37.326858   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:37.326890   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:39.843323   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:39.859899   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:39.859961   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:39.903125   66615 cri.go:89] found id: ""
	I0429 20:08:39.903155   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.903164   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:39.903169   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:39.903243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:39.944271   66615 cri.go:89] found id: ""
	I0429 20:08:39.944300   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.944309   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:39.944314   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:39.944363   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:35.557115   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:38.056175   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.550339   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:42.048622   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.756355   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:42.255528   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.989934   66615 cri.go:89] found id: ""
	I0429 20:08:39.989964   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.989972   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:39.989978   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:39.990032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:40.025936   66615 cri.go:89] found id: ""
	I0429 20:08:40.025965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.025976   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:40.025983   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:40.026044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:40.065943   66615 cri.go:89] found id: ""
	I0429 20:08:40.065965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.065976   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:40.065984   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:40.066038   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:40.109986   66615 cri.go:89] found id: ""
	I0429 20:08:40.110018   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.110030   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:40.110038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:40.110115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:40.155610   66615 cri.go:89] found id: ""
	I0429 20:08:40.155716   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.155734   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:40.155745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:40.155803   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:40.196213   66615 cri.go:89] found id: ""
	I0429 20:08:40.196239   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.196246   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:40.196256   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:40.196272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:40.280330   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:40.280372   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:40.326774   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:40.326810   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:40.379438   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:40.379475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:40.395332   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:40.395362   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:40.504413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:43.005046   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:43.020464   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:43.020544   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:43.066403   66615 cri.go:89] found id: ""
	I0429 20:08:43.066432   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.066444   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:43.066452   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:43.066548   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:43.109732   66615 cri.go:89] found id: ""
	I0429 20:08:43.109760   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.109771   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:43.109778   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:43.109850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:43.158457   66615 cri.go:89] found id: ""
	I0429 20:08:43.158483   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.158492   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:43.158498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:43.158561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:43.207170   66615 cri.go:89] found id: ""
	I0429 20:08:43.207201   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.207213   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:43.207221   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:43.207281   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:43.246746   66615 cri.go:89] found id: ""
	I0429 20:08:43.246783   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.246804   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:43.246811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:43.246875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:43.292786   66615 cri.go:89] found id: ""
	I0429 20:08:43.292813   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.292824   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:43.292831   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:43.292896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:43.337509   66615 cri.go:89] found id: ""
	I0429 20:08:43.337537   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.337546   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:43.337551   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:43.337601   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:43.378446   66615 cri.go:89] found id: ""
	I0429 20:08:43.378473   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.378481   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:43.378490   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:43.378502   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:43.460438   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:43.460474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:43.503908   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:43.503945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:43.561661   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:43.561699   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:43.577924   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:43.577954   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:43.667006   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:40.555875   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:43.057183   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:44.049342   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.049873   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:44.256458   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.256554   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.168175   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:46.212494   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:46.212579   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:46.251567   66615 cri.go:89] found id: ""
	I0429 20:08:46.251593   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.251603   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:46.251610   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:46.251673   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:46.291913   66615 cri.go:89] found id: ""
	I0429 20:08:46.291943   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.291955   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:46.291962   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:46.292023   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:46.331801   66615 cri.go:89] found id: ""
	I0429 20:08:46.331827   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.331836   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:46.331842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:46.331899   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:46.375956   66615 cri.go:89] found id: ""
	I0429 20:08:46.375989   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.376001   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:46.376008   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:46.376090   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:46.425572   66615 cri.go:89] found id: ""
	I0429 20:08:46.425599   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.425609   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:46.425618   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:46.425681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:46.468161   66615 cri.go:89] found id: ""
	I0429 20:08:46.468226   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.468249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:46.468263   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:46.468433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:46.512163   66615 cri.go:89] found id: ""
	I0429 20:08:46.512193   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.512205   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:46.512212   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:46.512277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:46.556047   66615 cri.go:89] found id: ""
	I0429 20:08:46.556078   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.556088   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:46.556099   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:46.556111   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:46.609886   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:46.609921   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:46.625848   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:46.625878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:46.699005   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:46.699037   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:46.699053   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:46.783886   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:46.783923   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.331288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:49.344805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:49.344864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:49.381576   66615 cri.go:89] found id: ""
	I0429 20:08:49.381598   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.381605   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:49.381619   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:49.381667   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:49.418276   66615 cri.go:89] found id: ""
	I0429 20:08:49.418316   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.418329   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:49.418336   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:49.418389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:49.460147   66615 cri.go:89] found id: ""
	I0429 20:08:49.460177   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.460188   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:49.460195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:49.460253   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:49.500534   66615 cri.go:89] found id: ""
	I0429 20:08:49.500562   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.500569   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:49.500575   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:49.500632   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:49.538481   66615 cri.go:89] found id: ""
	I0429 20:08:49.538521   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.538534   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:49.538541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:49.538603   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:49.580192   66615 cri.go:89] found id: ""
	I0429 20:08:49.580218   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.580228   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:49.580234   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:49.580299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:49.616400   66615 cri.go:89] found id: ""
	I0429 20:08:49.616427   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.616437   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:49.616444   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:49.616551   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:49.652871   66615 cri.go:89] found id: ""
	I0429 20:08:49.652900   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.652918   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:49.652931   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:49.652947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:49.728173   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:49.728200   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:49.728212   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:49.813701   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:49.813749   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.855685   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:49.855712   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:49.906480   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:49.906514   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:45.559939   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.056008   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.056054   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.052578   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.550638   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.550910   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.257460   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.259418   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.757365   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.422430   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:52.437412   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:52.437488   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:52.476896   66615 cri.go:89] found id: ""
	I0429 20:08:52.476919   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.476927   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:52.476932   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:52.476976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:52.517266   66615 cri.go:89] found id: ""
	I0429 20:08:52.517298   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.517310   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:52.517318   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:52.517381   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:52.560886   66615 cri.go:89] found id: ""
	I0429 20:08:52.560909   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.560917   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:52.560922   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:52.560969   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:52.601362   66615 cri.go:89] found id: ""
	I0429 20:08:52.601398   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.601419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:52.601429   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:52.601506   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:52.639544   66615 cri.go:89] found id: ""
	I0429 20:08:52.639580   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.639591   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:52.639599   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:52.639652   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:52.681088   66615 cri.go:89] found id: ""
	I0429 20:08:52.681120   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.681130   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:52.681138   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:52.681204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:52.721777   66615 cri.go:89] found id: ""
	I0429 20:08:52.721802   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.721820   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:52.721828   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:52.721900   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:52.762823   66615 cri.go:89] found id: ""
	I0429 20:08:52.762845   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.762856   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:52.762863   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:52.762875   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:52.819291   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:52.819326   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:52.847120   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:52.847165   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:52.956274   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:52.956301   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:52.956317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:53.041636   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:53.041676   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:52.056558   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:54.555745   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.051656   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:57.549668   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.257083   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:57.757855   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.592636   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:55.607372   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:55.607449   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:55.643959   66615 cri.go:89] found id: ""
	I0429 20:08:55.643991   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.644000   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:55.644005   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:55.644061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:55.682272   66615 cri.go:89] found id: ""
	I0429 20:08:55.682304   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.682315   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:55.682323   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:55.682384   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:55.720157   66615 cri.go:89] found id: ""
	I0429 20:08:55.720189   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.720200   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:55.720207   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:55.720272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:55.761748   66615 cri.go:89] found id: ""
	I0429 20:08:55.761773   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.761781   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:55.761786   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:55.761842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:55.802377   66615 cri.go:89] found id: ""
	I0429 20:08:55.802405   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.802416   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:55.802423   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:55.802494   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:55.838986   66615 cri.go:89] found id: ""
	I0429 20:08:55.839016   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.839024   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:55.839030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:55.839077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:55.874991   66615 cri.go:89] found id: ""
	I0429 20:08:55.875022   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.875032   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:55.875039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:55.875106   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:55.913561   66615 cri.go:89] found id: ""
	I0429 20:08:55.913595   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.913607   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:55.913618   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:55.913633   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:55.965355   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:55.965391   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:55.981222   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:55.981259   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:56.056656   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:56.056685   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:56.056701   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:56.135276   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:56.135309   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:58.682855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:58.701679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:58.701769   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:58.760807   66615 cri.go:89] found id: ""
	I0429 20:08:58.760828   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.760841   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:58.760858   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:58.760910   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:58.835167   66615 cri.go:89] found id: ""
	I0429 20:08:58.835204   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.835216   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:58.835223   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:58.835289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:58.877367   66615 cri.go:89] found id: ""
	I0429 20:08:58.877398   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.877409   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:58.877417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:58.877483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:58.923726   66615 cri.go:89] found id: ""
	I0429 20:08:58.923751   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.923760   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:58.923766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:58.923817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:58.967780   66615 cri.go:89] found id: ""
	I0429 20:08:58.967804   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.967811   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:58.967816   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:58.967865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:59.010646   66615 cri.go:89] found id: ""
	I0429 20:08:59.010682   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.010690   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:59.010697   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:59.010759   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:59.057380   66615 cri.go:89] found id: ""
	I0429 20:08:59.057408   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.057418   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:59.057426   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:59.057483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:59.099669   66615 cri.go:89] found id: ""
	I0429 20:08:59.099698   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.099706   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:59.099715   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:59.099731   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:59.146831   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:59.146861   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:59.204232   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:59.204274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:59.219799   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:59.219824   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:59.305438   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:59.305465   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:59.305481   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:56.555976   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:58.557892   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:00.049511   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:02.050709   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:00.256064   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:02.257053   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:01.885861   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:01.900746   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:01.900808   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:01.942174   66615 cri.go:89] found id: ""
	I0429 20:09:01.942210   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.942218   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:01.942224   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:01.942285   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:01.986463   66615 cri.go:89] found id: ""
	I0429 20:09:01.986491   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.986502   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:01.986509   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:01.986570   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:02.026290   66615 cri.go:89] found id: ""
	I0429 20:09:02.026314   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.026321   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:02.026327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:02.026375   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:02.064239   66615 cri.go:89] found id: ""
	I0429 20:09:02.064259   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.064266   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:02.064271   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:02.064321   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:02.105807   66615 cri.go:89] found id: ""
	I0429 20:09:02.105838   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.105857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:02.105866   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:02.105926   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:02.144939   66615 cri.go:89] found id: ""
	I0429 20:09:02.144962   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.144970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:02.144975   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:02.145037   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:02.192866   66615 cri.go:89] found id: ""
	I0429 20:09:02.192891   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.192899   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:02.192905   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:02.192955   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:02.232485   66615 cri.go:89] found id: ""
	I0429 20:09:02.232515   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.232524   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:02.232533   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:02.232550   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:02.287374   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:02.287402   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:02.302979   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:02.303009   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:02.380693   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:02.380713   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:02.380725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:02.467048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:02.467084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:01.055311   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:03.055538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:05.056325   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:04.051014   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:06.556497   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:04.758329   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:07.256328   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:05.018176   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:05.033178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:05.033238   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:05.079008   66615 cri.go:89] found id: ""
	I0429 20:09:05.079034   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.079043   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:05.079050   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:05.079113   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:05.118620   66615 cri.go:89] found id: ""
	I0429 20:09:05.118642   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.118650   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:05.118655   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:05.118714   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:05.159603   66615 cri.go:89] found id: ""
	I0429 20:09:05.159646   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.159660   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:05.159666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:05.159733   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:05.200224   66615 cri.go:89] found id: ""
	I0429 20:09:05.200252   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.200262   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:05.200270   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:05.200344   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:05.246341   66615 cri.go:89] found id: ""
	I0429 20:09:05.246384   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.246396   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:05.246403   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:05.246471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:05.286126   66615 cri.go:89] found id: ""
	I0429 20:09:05.286153   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.286163   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:05.286171   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:05.286235   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:05.326911   66615 cri.go:89] found id: ""
	I0429 20:09:05.326941   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.326952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:05.326958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:05.327019   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:05.365564   66615 cri.go:89] found id: ""
	I0429 20:09:05.365592   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.365602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:05.365621   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:05.365637   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:05.445857   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:05.445877   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:05.445889   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:05.530129   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:05.530164   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:05.573936   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:05.573971   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:05.631263   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:05.631299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:08.147288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:08.162949   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:08.163021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:08.203009   66615 cri.go:89] found id: ""
	I0429 20:09:08.203033   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.203041   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:08.203047   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:08.203112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:08.241708   66615 cri.go:89] found id: ""
	I0429 20:09:08.241735   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.241744   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:08.241750   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:08.241801   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:08.283976   66615 cri.go:89] found id: ""
	I0429 20:09:08.284005   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.284017   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:08.284023   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:08.284091   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:08.323909   66615 cri.go:89] found id: ""
	I0429 20:09:08.323939   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.323951   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:08.323962   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:08.324031   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:08.363236   66615 cri.go:89] found id: ""
	I0429 20:09:08.363263   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.363271   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:08.363276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:08.363328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:08.401767   66615 cri.go:89] found id: ""
	I0429 20:09:08.401790   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.401798   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:08.401803   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:08.401851   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:08.443678   66615 cri.go:89] found id: ""
	I0429 20:09:08.443709   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.443726   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:08.443731   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:08.443791   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:08.489025   66615 cri.go:89] found id: ""
	I0429 20:09:08.489069   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.489103   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:08.489129   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:08.489163   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:08.543421   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:08.543462   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:08.560425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:08.560459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:08.642819   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:08.642840   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:08.642855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:08.726644   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:08.726682   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:07.555523   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.556138   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.049664   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.050246   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.256452   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.257458   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.277817   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:11.292340   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:11.292420   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:11.330721   66615 cri.go:89] found id: ""
	I0429 20:09:11.330756   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.330768   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:11.330776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:11.330850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:11.372057   66615 cri.go:89] found id: ""
	I0429 20:09:11.372089   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.372098   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:11.372103   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:11.372155   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:11.414786   66615 cri.go:89] found id: ""
	I0429 20:09:11.414814   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.414825   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:11.414832   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:11.414898   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:11.454934   66615 cri.go:89] found id: ""
	I0429 20:09:11.454961   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.454969   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:11.454974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:11.455039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:11.494169   66615 cri.go:89] found id: ""
	I0429 20:09:11.494200   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.494211   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:11.494217   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:11.494277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:11.541646   66615 cri.go:89] found id: ""
	I0429 20:09:11.541684   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.541694   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:11.541701   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:11.541766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:11.584025   66615 cri.go:89] found id: ""
	I0429 20:09:11.584055   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.584067   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:11.584075   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:11.584138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:11.622425   66615 cri.go:89] found id: ""
	I0429 20:09:11.622459   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.622471   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:11.622481   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:11.622493   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:11.676416   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:11.676450   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:11.693793   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:11.693822   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:11.771410   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:11.771437   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:11.771454   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:11.854969   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:11.855047   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:14.398871   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:14.415894   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:14.415983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:14.454718   66615 cri.go:89] found id: ""
	I0429 20:09:14.454752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.454763   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:14.454773   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:14.454836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:14.498562   66615 cri.go:89] found id: ""
	I0429 20:09:14.498591   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.498602   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:14.498609   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:14.498669   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:14.536357   66615 cri.go:89] found id: ""
	I0429 20:09:14.536384   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.536395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:14.536402   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:14.536460   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:14.577240   66615 cri.go:89] found id: ""
	I0429 20:09:14.577274   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.577284   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:14.577291   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:14.577372   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:14.617231   66615 cri.go:89] found id: ""
	I0429 20:09:14.617266   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.617279   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:14.617287   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:14.617355   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:14.659053   66615 cri.go:89] found id: ""
	I0429 20:09:14.659081   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.659090   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:14.659096   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:14.659145   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:14.708723   66615 cri.go:89] found id: ""
	I0429 20:09:14.708752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.708760   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:14.708766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:14.708814   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:14.753732   66615 cri.go:89] found id: ""
	I0429 20:09:14.753762   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.753773   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:14.753783   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:14.753798   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:14.771952   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:14.771985   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:14.842649   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:14.842680   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:14.842696   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:14.925565   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:14.925603   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:11.556903   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:14.057196   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:13.550999   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:16.054439   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:13.257735   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:15.756651   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:17.756760   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:14.975731   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:14.975765   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.528872   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:17.544373   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:17.544455   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:17.582977   66615 cri.go:89] found id: ""
	I0429 20:09:17.583001   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.583009   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:17.583014   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:17.583079   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:17.620322   66615 cri.go:89] found id: ""
	I0429 20:09:17.620352   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.620368   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:17.620373   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:17.620421   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:17.664339   66615 cri.go:89] found id: ""
	I0429 20:09:17.664367   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.664375   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:17.664381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:17.664433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:17.705150   66615 cri.go:89] found id: ""
	I0429 20:09:17.705175   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.705184   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:17.705189   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:17.705239   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:17.749713   66615 cri.go:89] found id: ""
	I0429 20:09:17.749738   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.749747   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:17.749752   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:17.749850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:17.791528   66615 cri.go:89] found id: ""
	I0429 20:09:17.791552   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.791560   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:17.791566   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:17.791615   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:17.834994   66615 cri.go:89] found id: ""
	I0429 20:09:17.835024   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.835035   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:17.835050   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:17.835107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:17.872194   66615 cri.go:89] found id: ""
	I0429 20:09:17.872226   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.872236   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:17.872248   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:17.872263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.926899   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:17.926936   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:17.944184   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:17.944218   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:18.029224   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:18.029246   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:18.029258   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:18.111112   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:18.111147   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:16.557282   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:19.056682   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:18.549106   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:20.550026   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:19.758897   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:22.257104   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:20.655965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:20.671420   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:20.671487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:20.710100   66615 cri.go:89] found id: ""
	I0429 20:09:20.710132   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.710144   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:20.710151   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:20.710221   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:20.748849   66615 cri.go:89] found id: ""
	I0429 20:09:20.748877   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.748888   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:20.748894   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:20.748956   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:20.788113   66615 cri.go:89] found id: ""
	I0429 20:09:20.788140   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.788151   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:20.788157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:20.788217   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:20.831432   66615 cri.go:89] found id: ""
	I0429 20:09:20.831455   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.831462   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:20.831470   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:20.831518   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:20.878156   66615 cri.go:89] found id: ""
	I0429 20:09:20.878183   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.878191   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:20.878197   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:20.878262   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:20.920691   66615 cri.go:89] found id: ""
	I0429 20:09:20.920718   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.920729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:20.920735   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:20.920795   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:20.960674   66615 cri.go:89] found id: ""
	I0429 20:09:20.960709   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.960719   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:20.960726   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:20.960786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:21.006462   66615 cri.go:89] found id: ""
	I0429 20:09:21.006486   66615 logs.go:276] 0 containers: []
	W0429 20:09:21.006495   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:21.006503   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:21.006518   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:21.060040   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:21.060076   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:21.077141   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:21.077171   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:21.157058   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:21.157083   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:21.157096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:21.265626   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:21.265662   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:23.813718   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:23.828338   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:23.828400   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:23.868730   66615 cri.go:89] found id: ""
	I0429 20:09:23.868760   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.868771   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:23.868776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:23.868842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:23.907919   66615 cri.go:89] found id: ""
	I0429 20:09:23.907941   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.907949   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:23.907956   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:23.908011   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:23.956769   66615 cri.go:89] found id: ""
	I0429 20:09:23.956794   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.956805   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:23.956811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:23.956875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:23.998578   66615 cri.go:89] found id: ""
	I0429 20:09:23.998612   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.998621   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:23.998628   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:23.998681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:24.037458   66615 cri.go:89] found id: ""
	I0429 20:09:24.037485   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.037492   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:24.037499   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:24.037562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:24.078305   66615 cri.go:89] found id: ""
	I0429 20:09:24.078336   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.078351   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:24.078358   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:24.078418   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:24.120100   66615 cri.go:89] found id: ""
	I0429 20:09:24.120129   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.120139   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:24.120147   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:24.120211   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:24.160953   66615 cri.go:89] found id: ""
	I0429 20:09:24.160988   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.161000   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:24.161012   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:24.161029   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:24.176654   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:24.176686   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:24.256631   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:24.256652   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:24.256668   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:24.335379   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:24.335424   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:24.379616   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:24.379649   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:21.556726   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:24.057483   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:23.050004   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:25.550882   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:27.551051   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:24.257726   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:26.757098   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:26.937283   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:26.956185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:26.956252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:26.997000   66615 cri.go:89] found id: ""
	I0429 20:09:26.997034   66615 logs.go:276] 0 containers: []
	W0429 20:09:26.997046   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:26.997053   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:26.997115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:27.042494   66615 cri.go:89] found id: ""
	I0429 20:09:27.042527   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.042538   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:27.042546   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:27.042608   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:27.086170   66615 cri.go:89] found id: ""
	I0429 20:09:27.086199   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.086211   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:27.086218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:27.086282   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:27.126502   66615 cri.go:89] found id: ""
	I0429 20:09:27.126531   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.126542   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:27.126560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:27.126635   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:27.175102   66615 cri.go:89] found id: ""
	I0429 20:09:27.175134   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.175142   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:27.175148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:27.175216   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:27.215983   66615 cri.go:89] found id: ""
	I0429 20:09:27.216013   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.216025   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:27.216033   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:27.216097   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:27.256427   66615 cri.go:89] found id: ""
	I0429 20:09:27.256456   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.256467   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:27.256474   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:27.256540   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:27.298444   66615 cri.go:89] found id: ""
	I0429 20:09:27.298479   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.298490   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:27.298501   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:27.298517   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:27.381579   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:27.381625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:27.429304   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:27.429350   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:27.483044   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:27.483082   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:27.500304   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:27.500332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:27.583909   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:26.555285   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:28.560544   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:30.049769   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:32.050537   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:29.256689   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:31.257554   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:30.084904   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:30.102417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:30.102486   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:30.146726   66615 cri.go:89] found id: ""
	I0429 20:09:30.146748   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.146755   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:30.146761   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:30.146809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:30.190739   66615 cri.go:89] found id: ""
	I0429 20:09:30.190768   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.190780   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:30.190788   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:30.190853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:30.228836   66615 cri.go:89] found id: ""
	I0429 20:09:30.228864   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.228879   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:30.228887   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:30.228951   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:30.270876   66615 cri.go:89] found id: ""
	I0429 20:09:30.270912   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.270920   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:30.270925   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:30.270995   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:30.310762   66615 cri.go:89] found id: ""
	I0429 20:09:30.310787   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.310795   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:30.310801   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:30.310850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:30.356339   66615 cri.go:89] found id: ""
	I0429 20:09:30.356363   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.356371   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:30.356376   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:30.356430   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:30.395540   66615 cri.go:89] found id: ""
	I0429 20:09:30.395575   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.395589   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:30.395598   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:30.395671   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:30.446237   66615 cri.go:89] found id: ""
	I0429 20:09:30.446263   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.446276   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:30.446286   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:30.446301   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:30.537309   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:30.537334   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:30.537349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:30.629116   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:30.629151   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:30.683308   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:30.683337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:30.735879   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:30.735910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.252322   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:33.268276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:33.268351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:33.309531   66615 cri.go:89] found id: ""
	I0429 20:09:33.309622   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.309641   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:33.309650   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:33.309719   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:33.367480   66615 cri.go:89] found id: ""
	I0429 20:09:33.367515   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.367527   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:33.367535   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:33.367595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:33.433717   66615 cri.go:89] found id: ""
	I0429 20:09:33.433742   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.433751   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:33.433756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:33.433820   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:33.484053   66615 cri.go:89] found id: ""
	I0429 20:09:33.484081   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.484093   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:33.484100   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:33.484165   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:33.524103   66615 cri.go:89] found id: ""
	I0429 20:09:33.524126   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.524136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:33.524143   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:33.524204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:33.565692   66615 cri.go:89] found id: ""
	I0429 20:09:33.565711   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.565719   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:33.565724   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:33.565784   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:33.607119   66615 cri.go:89] found id: ""
	I0429 20:09:33.607143   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.607153   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:33.607160   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:33.607225   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:33.648407   66615 cri.go:89] found id: ""
	I0429 20:09:33.648432   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.648440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:33.648449   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:33.648463   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:33.730744   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:33.730781   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:33.774295   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:33.774328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:33.829609   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:33.829653   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.846048   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:33.846092   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:33.924413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
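Each retry cycle above asks CRI-O for one control-plane component at a time and finds nothing running. The scan amounts to the following loop, sketched here using only the commands and component names already shown in the log:

    # Sketch of the per-component container scan the collector repeats above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # Empty output here is what produces the 'No container was found matching' warnings above.
      [ -z "$ids" ] && echo "no container matching $name"
    done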
	I0429 20:09:31.056307   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:33.056538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:34.548872   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.550765   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:33.758571   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.257361   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.425072   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:36.440185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:36.440268   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:36.484364   66615 cri.go:89] found id: ""
	I0429 20:09:36.484386   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.484394   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:36.484400   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:36.484450   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:36.520436   66615 cri.go:89] found id: ""
	I0429 20:09:36.520466   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.520478   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:36.520487   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:36.520549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:36.563597   66615 cri.go:89] found id: ""
	I0429 20:09:36.563622   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.563630   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:36.563635   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:36.563704   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:36.613106   66615 cri.go:89] found id: ""
	I0429 20:09:36.613134   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.613143   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:36.613148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:36.613204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:36.658127   66615 cri.go:89] found id: ""
	I0429 20:09:36.658151   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.658159   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:36.658166   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:36.658229   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:36.707388   66615 cri.go:89] found id: ""
	I0429 20:09:36.707415   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.707423   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:36.707430   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:36.707479   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:36.753363   66615 cri.go:89] found id: ""
	I0429 20:09:36.753394   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.753405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:36.753413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:36.753475   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:36.801492   66615 cri.go:89] found id: ""
	I0429 20:09:36.801513   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.801521   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:36.801530   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:36.801542   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:36.857055   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:36.857108   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:36.874567   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:36.874595   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:36.956176   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:36.956202   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:36.956217   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:37.039958   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:37.039997   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:39.591442   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:39.607842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:39.607927   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:39.651917   66615 cri.go:89] found id: ""
	I0429 20:09:39.651941   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.651948   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:39.651955   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:39.652020   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:39.690032   66615 cri.go:89] found id: ""
	I0429 20:09:39.690059   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.690078   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:39.690086   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:39.690152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:39.733176   66615 cri.go:89] found id: ""
	I0429 20:09:39.733200   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.733209   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:39.733215   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:39.733261   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:39.779528   66615 cri.go:89] found id: ""
	I0429 20:09:39.779560   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.779572   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:39.779581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:39.779650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:39.822408   66615 cri.go:89] found id: ""
	I0429 20:09:39.822436   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.822445   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:39.822452   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:39.822522   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:39.864895   66615 cri.go:89] found id: ""
	I0429 20:09:39.864922   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.864930   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:39.864938   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:39.865008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:39.907498   66615 cri.go:89] found id: ""
	I0429 20:09:39.907523   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.907533   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:39.907539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:39.907606   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:39.948400   66615 cri.go:89] found id: ""
	I0429 20:09:39.948430   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.948440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:39.948449   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:39.948465   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:35.557262   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:38.056877   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:40.058568   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:39.049938   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:41.050139   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:38.756883   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:41.256775   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:39.964733   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:39.964763   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:40.043568   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:40.043593   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:40.043609   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:40.130776   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:40.130815   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:40.182011   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:40.182042   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:42.739068   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:42.756144   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:42.756286   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:42.798776   66615 cri.go:89] found id: ""
	I0429 20:09:42.798801   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.798810   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:42.798815   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:42.798861   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:42.837122   66615 cri.go:89] found id: ""
	I0429 20:09:42.837146   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.837154   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:42.837159   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:42.837205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:42.875435   66615 cri.go:89] found id: ""
	I0429 20:09:42.875461   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.875471   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:42.875479   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:42.875536   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:42.920044   66615 cri.go:89] found id: ""
	I0429 20:09:42.920076   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.920087   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:42.920094   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:42.920175   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:42.960122   66615 cri.go:89] found id: ""
	I0429 20:09:42.960152   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.960163   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:42.960169   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:42.960215   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:42.999784   66615 cri.go:89] found id: ""
	I0429 20:09:42.999811   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.999829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:42.999837   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:42.999917   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:43.040882   66615 cri.go:89] found id: ""
	I0429 20:09:43.040930   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.040952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:43.040959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:43.041044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:43.082596   66615 cri.go:89] found id: ""
	I0429 20:09:43.082627   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.082639   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:43.082650   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:43.082672   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:43.140302   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:43.140343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:43.157508   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:43.157547   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:43.241025   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:43.241047   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:43.241061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:43.325820   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:43.325855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:42.058727   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:44.556415   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:43.051020   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.550017   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:43.258400   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.756441   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:47.757029   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.871561   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:45.887323   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:45.887398   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:45.930021   66615 cri.go:89] found id: ""
	I0429 20:09:45.930050   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.930062   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:45.930088   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:45.930148   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:45.971404   66615 cri.go:89] found id: ""
	I0429 20:09:45.971434   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.971445   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:45.971452   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:45.971513   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:46.018801   66615 cri.go:89] found id: ""
	I0429 20:09:46.018825   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.018833   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:46.018838   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:46.018886   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:46.065118   66615 cri.go:89] found id: ""
	I0429 20:09:46.065140   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.065148   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:46.065153   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:46.065201   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:46.105244   66615 cri.go:89] found id: ""
	I0429 20:09:46.105271   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.105294   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:46.105309   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:46.105373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:46.153736   66615 cri.go:89] found id: ""
	I0429 20:09:46.153759   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.153768   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:46.153773   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:46.153836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:46.198940   66615 cri.go:89] found id: ""
	I0429 20:09:46.198965   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.198973   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:46.198979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:46.199064   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:46.238001   66615 cri.go:89] found id: ""
	I0429 20:09:46.238031   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.238044   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:46.238056   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:46.238087   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:46.292309   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:46.292357   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:46.307243   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:46.307274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:46.386832   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:46.386852   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:46.386869   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:46.468856   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:46.468891   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.017354   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:49.032753   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:49.032832   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:49.075345   66615 cri.go:89] found id: ""
	I0429 20:09:49.075375   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.075388   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:49.075394   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:49.075447   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:49.115294   66615 cri.go:89] found id: ""
	I0429 20:09:49.115328   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.115339   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:49.115347   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:49.115412   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:49.164115   66615 cri.go:89] found id: ""
	I0429 20:09:49.164140   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.164148   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:49.164154   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:49.164210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:49.207643   66615 cri.go:89] found id: ""
	I0429 20:09:49.207668   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.207679   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:49.207698   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:49.207762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:49.247121   66615 cri.go:89] found id: ""
	I0429 20:09:49.247147   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.247156   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:49.247162   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:49.247220   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:49.288594   66615 cri.go:89] found id: ""
	I0429 20:09:49.288626   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.288636   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:49.288643   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:49.288711   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:49.330243   66615 cri.go:89] found id: ""
	I0429 20:09:49.330273   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.330290   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:49.330300   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:49.330365   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:49.371304   66615 cri.go:89] found id: ""
	I0429 20:09:49.371348   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.371360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:49.371372   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:49.371392   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:49.450910   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:49.450949   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.494940   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:49.494970   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:49.553320   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:49.553364   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:49.568850   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:49.568878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:49.644932   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
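Alongside the container scan, each cycle also gathers kubelet, dmesg, CRI-O and container-status output. The commands appear verbatim in the lines above and can be rerun as-is on the node to inspect why the control plane never started, for example:

    # Same log-gathering commands the collector runs above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a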
	I0429 20:09:46.559246   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:49.056790   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:48.050285   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:50.050579   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.549882   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:49.757113   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.258680   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.145702   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:52.162681   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:52.162756   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:52.204816   66615 cri.go:89] found id: ""
	I0429 20:09:52.204858   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.204870   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:52.204888   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:52.204963   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:52.248481   66615 cri.go:89] found id: ""
	I0429 20:09:52.248510   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.248519   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:52.248525   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:52.248596   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:52.289158   66615 cri.go:89] found id: ""
	I0429 20:09:52.289186   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.289194   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:52.289200   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:52.289260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:52.329905   66615 cri.go:89] found id: ""
	I0429 20:09:52.329931   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.329942   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:52.329950   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:52.330025   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:52.372523   66615 cri.go:89] found id: ""
	I0429 20:09:52.372546   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.372554   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:52.372560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:52.372623   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:52.414936   66615 cri.go:89] found id: ""
	I0429 20:09:52.414970   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.414982   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:52.414989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:52.415056   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:52.454139   66615 cri.go:89] found id: ""
	I0429 20:09:52.454164   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.454172   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:52.454178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:52.454236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:52.494093   66615 cri.go:89] found id: ""
	I0429 20:09:52.494129   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.494142   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:52.494155   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:52.494195   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:52.552104   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:52.552142   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:52.568430   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:52.568459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:52.649708   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:52.649736   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:52.649752   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:52.746231   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:52.746272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:51.057536   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:53.556862   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:55.049835   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:57.050606   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:54.759308   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:57.256396   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:55.296228   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:55.311257   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:55.311328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:55.352071   66615 cri.go:89] found id: ""
	I0429 20:09:55.352098   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.352109   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:55.352116   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:55.352177   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:55.399806   66615 cri.go:89] found id: ""
	I0429 20:09:55.399837   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.399847   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:55.399860   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:55.399947   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:55.444372   66615 cri.go:89] found id: ""
	I0429 20:09:55.444398   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.444406   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:55.444411   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:55.444468   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:55.485542   66615 cri.go:89] found id: ""
	I0429 20:09:55.485568   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.485579   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:55.485586   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:55.485670   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:55.535452   66615 cri.go:89] found id: ""
	I0429 20:09:55.535483   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.535494   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:55.535502   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:55.535566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:55.578009   66615 cri.go:89] found id: ""
	I0429 20:09:55.578036   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.578048   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:55.578056   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:55.578138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:55.618302   66615 cri.go:89] found id: ""
	I0429 20:09:55.618336   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.618347   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:55.618355   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:55.618419   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:55.660489   66615 cri.go:89] found id: ""
	I0429 20:09:55.660518   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.660526   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:55.660535   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:55.660548   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:55.713953   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:55.713993   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:55.729624   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:55.729656   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:55.813718   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:55.813746   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:55.813762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:55.898805   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:55.898849   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:58.467014   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:58.482852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:58.482925   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:58.522862   66615 cri.go:89] found id: ""
	I0429 20:09:58.522896   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.522908   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:58.522916   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:58.523000   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:58.568234   66615 cri.go:89] found id: ""
	I0429 20:09:58.568259   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.568266   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:58.568272   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:58.568327   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:58.609147   66615 cri.go:89] found id: ""
	I0429 20:09:58.609175   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.609185   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:58.609192   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:58.609265   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:58.657074   66615 cri.go:89] found id: ""
	I0429 20:09:58.657104   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.657115   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:58.657122   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:58.657186   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:58.706819   66615 cri.go:89] found id: ""
	I0429 20:09:58.706846   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.706857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:58.706865   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:58.706929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:58.754967   66615 cri.go:89] found id: ""
	I0429 20:09:58.754998   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.755007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:58.755018   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:58.755078   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:58.793657   66615 cri.go:89] found id: ""
	I0429 20:09:58.793694   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.793704   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:58.793709   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:58.793766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:58.832023   66615 cri.go:89] found id: ""
	I0429 20:09:58.832055   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.832066   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:58.832078   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:58.832094   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:58.886568   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:58.886605   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:58.902126   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:58.902154   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:58.986786   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:58.986814   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:58.986831   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:59.072258   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:59.072296   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:55.557245   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:58.056570   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:59.549825   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:02.050651   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:59.756493   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:01.756935   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:01.620172   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:01.636958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:01.637055   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:01.703865   66615 cri.go:89] found id: ""
	I0429 20:10:01.703890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.703899   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:01.703905   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:01.703950   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:01.742655   66615 cri.go:89] found id: ""
	I0429 20:10:01.742684   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.742692   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:01.742707   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:01.742778   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:01.782866   66615 cri.go:89] found id: ""
	I0429 20:10:01.782890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.782901   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:01.782908   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:01.782964   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:01.822958   66615 cri.go:89] found id: ""
	I0429 20:10:01.822984   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.822992   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:01.822997   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:01.823044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:01.868581   66615 cri.go:89] found id: ""
	I0429 20:10:01.868604   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.868612   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:01.868622   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:01.868675   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:01.908216   66615 cri.go:89] found id: ""
	I0429 20:10:01.908241   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.908249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:01.908255   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:01.908328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:01.953100   66615 cri.go:89] found id: ""
	I0429 20:10:01.953131   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.953142   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:01.953150   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:01.953213   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:01.999940   66615 cri.go:89] found id: ""
	I0429 20:10:01.999974   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.999988   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:01.999999   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:02.000012   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:02.061669   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:02.061704   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:02.077609   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:02.077640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:02.169643   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:02.169666   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:02.169679   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:02.250615   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:02.250657   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:04.803629   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:04.819286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:04.819364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:04.860501   66615 cri.go:89] found id: ""
	I0429 20:10:04.860530   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.860541   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:04.860548   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:04.860672   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:04.898444   66615 cri.go:89] found id: ""
	I0429 20:10:04.898472   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.898480   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:04.898486   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:04.898546   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:04.936569   66615 cri.go:89] found id: ""
	I0429 20:10:04.936599   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.936609   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:04.936617   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:04.936695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:00.556325   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:02.557754   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:05.058245   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:04.551711   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:07.050327   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:03.757096   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:06.257529   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:04.979667   66615 cri.go:89] found id: ""
	I0429 20:10:04.979696   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.979708   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:04.979715   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:04.979768   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:05.019608   66615 cri.go:89] found id: ""
	I0429 20:10:05.019638   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.019650   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:05.019658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:05.019724   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:05.063723   66615 cri.go:89] found id: ""
	I0429 20:10:05.063749   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.063758   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:05.063765   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:05.063821   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:05.106676   66615 cri.go:89] found id: ""
	I0429 20:10:05.106704   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.106714   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:05.106721   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:05.106783   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:05.147652   66615 cri.go:89] found id: ""
	I0429 20:10:05.147683   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.147693   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:05.147704   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:05.147721   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:05.189048   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:05.189085   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:05.248635   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:05.248669   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:05.265791   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:05.265826   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:05.343190   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:05.343217   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:05.343234   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:07.926868   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:07.942581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:07.942656   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:07.981316   66615 cri.go:89] found id: ""
	I0429 20:10:07.981349   66615 logs.go:276] 0 containers: []
	W0429 20:10:07.981361   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:07.981368   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:07.981429   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:08.024017   66615 cri.go:89] found id: ""
	I0429 20:10:08.024045   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.024056   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:08.024062   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:08.024146   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:08.075761   66615 cri.go:89] found id: ""
	I0429 20:10:08.075786   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.075798   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:08.075805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:08.075864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:08.146501   66615 cri.go:89] found id: ""
	I0429 20:10:08.146528   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.146536   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:08.146541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:08.146624   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:08.204987   66615 cri.go:89] found id: ""
	I0429 20:10:08.205013   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.205021   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:08.205027   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:08.205083   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:08.244930   66615 cri.go:89] found id: ""
	I0429 20:10:08.244959   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.244970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:08.244979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:08.245040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:08.284204   66615 cri.go:89] found id: ""
	I0429 20:10:08.284232   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.284243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:08.284250   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:08.284305   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:08.324077   66615 cri.go:89] found id: ""
	I0429 20:10:08.324102   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.324113   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:08.324123   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:08.324139   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:08.341584   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:08.341614   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:08.429808   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:08.429827   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:08.429840   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:08.509906   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:08.509942   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:08.562662   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:08.562697   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:07.557462   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:10.055718   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:09.553108   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:12.050533   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:12.543954   66218 pod_ready.go:81] duration metric: took 4m0.001047967s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" ...
	E0429 20:10:12.543994   66218 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 20:10:12.544032   66218 pod_ready.go:38] duration metric: took 4m6.615064199s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:10:12.544058   66218 kubeadm.go:591] duration metric: took 4m18.60301174s to restartPrimaryControlPlane
	W0429 20:10:12.544116   66218 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:10:12.544146   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:10:08.757127   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:10.760764   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:11.121673   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:11.137328   66615 kubeadm.go:591] duration metric: took 4m4.72832668s to restartPrimaryControlPlane
	W0429 20:10:11.137411   66615 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:10:11.137446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:10:13.254357   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.116867978s)
	I0429 20:10:13.254436   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:13.275293   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:10:13.287073   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:10:13.298046   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:10:13.298080   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:10:13.298132   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:10:13.311790   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:10:13.311861   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:10:13.323201   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:10:13.334284   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:10:13.334357   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:10:13.348597   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.361993   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:10:13.362055   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.376185   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:10:13.389715   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:10:13.389778   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:10:13.403955   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:10:13.675887   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:10:12.056403   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:14.059895   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:13.257345   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:15.257388   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:17.259138   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:16.557200   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:18.559617   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:19.756708   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:21.757655   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:21.056581   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:23.057477   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:24.256386   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:26.757303   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:25.556902   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:28.055172   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:30.056549   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:29.256790   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:31.757538   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:32.560174   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:35.056286   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:33.758717   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:36.257274   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:37.056603   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:39.557292   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:38.757913   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:40.758857   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:42.056927   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:44.557003   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:44.557038   66875 pod_ready.go:81] duration metric: took 4m0.008018273s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	E0429 20:10:44.557050   66875 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0429 20:10:44.557062   66875 pod_ready.go:38] duration metric: took 4m2.911025288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:10:44.557085   66875 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:10:44.557123   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:44.557191   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:44.620871   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:44.620900   66875 cri.go:89] found id: ""
	I0429 20:10:44.620910   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:44.620970   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.626852   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:44.626919   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:44.673726   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:44.673753   66875 cri.go:89] found id: ""
	I0429 20:10:44.673762   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:44.673827   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.680083   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:44.680157   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:44.724866   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:44.724899   66875 cri.go:89] found id: ""
	I0429 20:10:44.724909   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:44.724976   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.730438   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:44.730492   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:44.785159   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:44.785178   66875 cri.go:89] found id: ""
	I0429 20:10:44.785185   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:44.785230   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.790370   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:44.790432   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:44.839200   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:44.839219   66875 cri.go:89] found id: ""
	I0429 20:10:44.839226   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:44.839277   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.845411   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:44.845490   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:44.907184   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:44.907210   66875 cri.go:89] found id: ""
	I0429 20:10:44.907224   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:44.907281   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.914531   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:44.914596   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:44.957389   66875 cri.go:89] found id: ""
	I0429 20:10:44.957422   66875 logs.go:276] 0 containers: []
	W0429 20:10:44.957430   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:44.957436   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:44.957493   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:45.001760   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:45.001783   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:45.001789   66875 cri.go:89] found id: ""
	I0429 20:10:45.001796   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:45.001845   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:45.007293   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:45.012864   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:45.012886   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:45.406875   66218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.862702626s)
	I0429 20:10:45.406957   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:45.424927   66218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:10:45.436628   66218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:10:45.447896   66218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:10:45.447921   66218 kubeadm.go:156] found existing configuration files:
	
	I0429 20:10:45.447970   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:10:45.458604   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:10:45.458662   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:10:45.469701   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:10:45.479738   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:10:45.479796   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:10:45.490097   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:10:45.500840   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:10:45.500903   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:10:45.512918   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:10:45.524679   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:10:45.524756   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:10:45.536044   66218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:10:45.598481   66218 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:10:45.598556   66218 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:10:45.783162   66218 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:10:45.783321   66218 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:10:45.783481   66218 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:10:46.079842   66218 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:10:46.081981   66218 out.go:204]   - Generating certificates and keys ...
	I0429 20:10:46.082084   66218 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:10:46.082174   66218 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:10:46.082295   66218 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:10:46.082382   66218 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:10:46.082485   66218 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:10:46.082578   66218 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:10:46.082694   66218 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:10:46.082793   66218 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:10:46.082906   66218 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:10:46.082976   66218 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:10:46.083009   66218 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:10:46.083070   66218 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:10:46.242368   66218 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:10:46.667998   66218 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:10:46.832801   66218 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:10:47.033146   66218 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:10:47.265305   66218 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:10:47.266631   66218 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:10:47.271057   66218 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:10:47.273021   66218 out.go:204]   - Booting up control plane ...
	I0429 20:10:47.273128   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:10:47.273245   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:10:47.273333   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:10:47.293530   66218 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:10:47.294487   66218 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:10:47.294564   66218 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:10:47.435669   66218 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:10:47.435802   66218 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:10:43.256983   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:45.257106   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:47.757018   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:45.564197   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:45.564231   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:45.635133   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:45.635168   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:45.779957   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:45.779992   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:45.827796   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:45.827828   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:45.870603   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:45.870636   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:45.935181   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:45.935220   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:46.007476   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:46.007518   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:46.071132   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:46.071169   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:46.130185   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:46.130218   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:46.148649   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:46.148684   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:46.196227   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:46.196266   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:46.245663   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:46.245707   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:48.789522   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:48.810752   66875 api_server.go:72] duration metric: took 4m14.399329979s to wait for apiserver process to appear ...
	I0429 20:10:48.810785   66875 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:10:48.810826   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:48.810921   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:48.868391   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:48.868415   66875 cri.go:89] found id: ""
	I0429 20:10:48.868424   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:48.868490   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.874253   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:48.874329   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:48.934057   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:48.934103   66875 cri.go:89] found id: ""
	I0429 20:10:48.934113   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:48.934173   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.940161   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:48.940244   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:48.992205   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:48.992227   66875 cri.go:89] found id: ""
	I0429 20:10:48.992234   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:48.992297   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.997496   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:48.997568   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:49.038579   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:49.038612   66875 cri.go:89] found id: ""
	I0429 20:10:49.038622   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:49.038683   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.045062   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:49.045129   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:49.084533   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:49.084561   66875 cri.go:89] found id: ""
	I0429 20:10:49.084570   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:49.084628   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.089601   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:49.089680   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:49.133281   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:49.133315   66875 cri.go:89] found id: ""
	I0429 20:10:49.133324   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:49.133387   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.140784   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:49.140889   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:49.201071   66875 cri.go:89] found id: ""
	I0429 20:10:49.201102   66875 logs.go:276] 0 containers: []
	W0429 20:10:49.201112   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:49.201117   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:49.201182   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:49.248708   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:49.248732   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:49.248738   66875 cri.go:89] found id: ""
	I0429 20:10:49.248747   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:49.248807   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.254131   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.259257   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:49.259287   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:49.325386   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:49.325417   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:49.371335   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:49.371365   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:49.414056   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:49.414112   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:49.469457   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:49.469493   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:49.523091   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:49.523123   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:49.581937   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:49.581977   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:49.599704   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:49.599738   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:49.738943   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:49.738984   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:49.814482   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:49.814521   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:50.306035   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:50.306084   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:50.371400   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:50.371485   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:50.426578   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:50.426613   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:48.438095   66218 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002489157s
	I0429 20:10:48.438230   66218 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:10:49.758262   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:52.256578   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:53.941848   66218 kubeadm.go:309] [api-check] The API server is healthy after 5.503491397s
	I0429 20:10:53.961404   66218 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:10:53.979792   66218 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:10:54.018524   66218 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:10:54.018776   66218 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-456788 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:10:54.037050   66218 kubeadm.go:309] [bootstrap-token] Using token: 793n05.pmfi0tdyn7q4x0lt
	I0429 20:10:54.038421   66218 out.go:204]   - Configuring RBAC rules ...
	I0429 20:10:54.038551   66218 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:10:54.045190   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:10:54.054625   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:10:54.060216   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:10:54.068878   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:10:54.073537   66218 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:10:54.355285   66218 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:10:54.800956   66218 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:10:55.352995   66218 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:10:55.353026   66218 kubeadm.go:309] 
	I0429 20:10:55.353135   66218 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:10:55.353158   66218 kubeadm.go:309] 
	I0429 20:10:55.353245   66218 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:10:55.353254   66218 kubeadm.go:309] 
	I0429 20:10:55.353290   66218 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:10:55.353382   66218 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:10:55.353456   66218 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:10:55.353467   66218 kubeadm.go:309] 
	I0429 20:10:55.353564   66218 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:10:55.353578   66218 kubeadm.go:309] 
	I0429 20:10:55.353637   66218 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:10:55.353648   66218 kubeadm.go:309] 
	I0429 20:10:55.353735   66218 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:10:55.353937   66218 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:10:55.354052   66218 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:10:55.354095   66218 kubeadm.go:309] 
	I0429 20:10:55.354216   66218 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:10:55.354334   66218 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:10:55.354348   66218 kubeadm.go:309] 
	I0429 20:10:55.354464   66218 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 793n05.pmfi0tdyn7q4x0lt \
	I0429 20:10:55.354615   66218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 20:10:55.354643   66218 kubeadm.go:309] 	--control-plane 
	I0429 20:10:55.354667   66218 kubeadm.go:309] 
	I0429 20:10:55.354799   66218 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:10:55.354810   66218 kubeadm.go:309] 
	I0429 20:10:55.354943   66218 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 793n05.pmfi0tdyn7q4x0lt \
	I0429 20:10:55.355111   66218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
	I0429 20:10:55.355493   66218 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:10:55.355513   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:10:55.355520   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:10:55.357341   66218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:10:52.999575   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:10:53.005598   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 200:
	ok
	I0429 20:10:53.006923   66875 api_server.go:141] control plane version: v1.30.0
	I0429 20:10:53.006951   66875 api_server.go:131] duration metric: took 4.196158371s to wait for apiserver health ...
	I0429 20:10:53.006978   66875 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:10:53.007011   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:53.007073   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:53.064156   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:53.064186   66875 cri.go:89] found id: ""
	I0429 20:10:53.064196   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:53.064256   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.069282   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:53.069361   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:53.128981   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:53.129016   66875 cri.go:89] found id: ""
	I0429 20:10:53.129025   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:53.129086   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.134680   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:53.134779   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:53.188828   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:53.188857   66875 cri.go:89] found id: ""
	I0429 20:10:53.188869   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:53.188922   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.195332   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:53.195401   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:53.245528   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:53.245548   66875 cri.go:89] found id: ""
	I0429 20:10:53.245556   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:53.245617   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.251849   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:53.251925   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:53.302914   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:53.302941   66875 cri.go:89] found id: ""
	I0429 20:10:53.302950   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:53.303004   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.308072   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:53.308138   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:53.358655   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:53.358684   66875 cri.go:89] found id: ""
	I0429 20:10:53.358693   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:53.358753   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.363796   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:53.363875   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:53.413543   66875 cri.go:89] found id: ""
	I0429 20:10:53.413573   66875 logs.go:276] 0 containers: []
	W0429 20:10:53.413586   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:53.413593   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:53.413651   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:53.457365   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:53.457393   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:53.457399   66875 cri.go:89] found id: ""
	I0429 20:10:53.457409   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:53.457473   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.464321   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.469358   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:53.469377   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:53.605546   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:53.605594   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:53.682788   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:53.682837   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:53.725985   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:53.726017   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:53.775864   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:53.775890   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:53.834762   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:53.834801   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:53.853796   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:53.853830   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:53.915651   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:53.915680   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:53.968857   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:53.968885   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:54.024061   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:54.024090   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:54.079637   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:54.079674   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:54.129296   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:54.129325   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:54.499803   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:54.499861   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:57.070245   66875 system_pods.go:59] 8 kube-system pods found
	I0429 20:10:57.070288   66875 system_pods.go:61] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running
	I0429 20:10:57.070296   66875 system_pods.go:61] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running
	I0429 20:10:57.070302   66875 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running
	I0429 20:10:57.070308   66875 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running
	I0429 20:10:57.070313   66875 system_pods.go:61] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running
	I0429 20:10:57.070318   66875 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running
	I0429 20:10:57.070329   66875 system_pods.go:61] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:10:57.070339   66875 system_pods.go:61] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running
	I0429 20:10:57.070353   66875 system_pods.go:74] duration metric: took 4.063366088s to wait for pod list to return data ...
	I0429 20:10:57.070366   66875 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:10:57.077008   66875 default_sa.go:45] found service account: "default"
	I0429 20:10:57.077031   66875 default_sa.go:55] duration metric: took 6.655489ms for default service account to be created ...
	I0429 20:10:57.077040   66875 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:10:57.087665   66875 system_pods.go:86] 8 kube-system pods found
	I0429 20:10:57.087695   66875 system_pods.go:89] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running
	I0429 20:10:57.087701   66875 system_pods.go:89] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running
	I0429 20:10:57.087707   66875 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running
	I0429 20:10:57.087711   66875 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running
	I0429 20:10:57.087715   66875 system_pods.go:89] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running
	I0429 20:10:57.087719   66875 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running
	I0429 20:10:57.087726   66875 system_pods.go:89] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:10:57.087730   66875 system_pods.go:89] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running
	I0429 20:10:57.087740   66875 system_pods.go:126] duration metric: took 10.694398ms to wait for k8s-apps to be running ...
	I0429 20:10:57.087749   66875 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:10:57.087794   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:57.106878   66875 system_svc.go:56] duration metric: took 19.118595ms WaitForService to wait for kubelet
	I0429 20:10:57.106917   66875 kubeadm.go:576] duration metric: took 4m22.695498557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:10:57.106945   66875 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:10:57.111052   66875 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:10:57.111082   66875 node_conditions.go:123] node cpu capacity is 2
	I0429 20:10:57.111096   66875 node_conditions.go:105] duration metric: took 4.144283ms to run NodePressure ...
	I0429 20:10:57.111112   66875 start.go:240] waiting for startup goroutines ...
	I0429 20:10:57.111122   66875 start.go:245] waiting for cluster config update ...
	I0429 20:10:57.111141   66875 start.go:254] writing updated cluster config ...
	I0429 20:10:57.111536   66875 ssh_runner.go:195] Run: rm -f paused
	I0429 20:10:57.169536   66875 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:10:57.172347   66875 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-866143" cluster and "default" namespace by default
	I0429 20:10:55.358683   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:10:55.371397   66218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:10:55.397119   66218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:10:55.397192   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:55.397192   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-456788 minikube.k8s.io/updated_at=2024_04_29T20_10_55_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=no-preload-456788 minikube.k8s.io/primary=true
	I0429 20:10:55.605222   66218 ops.go:34] apiserver oom_adj: -16
	I0429 20:10:55.605588   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:56.106450   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:56.605894   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:57.105657   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:57.605823   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:54.258101   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:56.258336   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:58.106263   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:58.605675   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:59.106483   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:59.605671   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:00.105670   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:00.605695   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:01.106482   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:01.606206   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:02.106534   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:02.606372   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:58.756416   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:00.756875   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:02.756955   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:03.106555   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:03.606298   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.106227   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.606531   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:05.105708   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:05.605735   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:06.106556   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:06.606380   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:07.105690   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:07.605718   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.749964   65980 pod_ready.go:81] duration metric: took 4m0.000195525s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" ...
	E0429 20:11:04.749999   65980 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 20:11:04.750024   65980 pod_ready.go:38] duration metric: took 4m6.211964949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:04.750053   65980 kubeadm.go:591] duration metric: took 4m17.268163648s to restartPrimaryControlPlane
	W0429 20:11:04.750123   65980 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:11:04.750156   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:11:08.106383   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:08.606498   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:08.726533   66218 kubeadm.go:1107] duration metric: took 13.329402445s to wait for elevateKubeSystemPrivileges
	W0429 20:11:08.726584   66218 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:11:08.726596   66218 kubeadm.go:393] duration metric: took 5m14.838913251s to StartCluster
	I0429 20:11:08.726617   66218 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:08.726706   66218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:11:08.729364   66218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:08.730202   66218 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:11:08.731600   66218 out.go:177] * Verifying Kubernetes components...
	I0429 20:11:08.730245   66218 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:11:08.730446   66218 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:11:08.733479   66218 addons.go:69] Setting storage-provisioner=true in profile "no-preload-456788"
	I0429 20:11:08.733509   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:11:08.733518   66218 addons.go:69] Setting default-storageclass=true in profile "no-preload-456788"
	I0429 20:11:08.733540   66218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-456788"
	I0429 20:11:08.733514   66218 addons.go:234] Setting addon storage-provisioner=true in "no-preload-456788"
	W0429 20:11:08.733641   66218 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:11:08.733674   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.733963   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.733988   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.734081   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.734079   66218 addons.go:69] Setting metrics-server=true in profile "no-preload-456788"
	I0429 20:11:08.734106   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.734117   66218 addons.go:234] Setting addon metrics-server=true in "no-preload-456788"
	W0429 20:11:08.734126   66218 addons.go:243] addon metrics-server should already be in state true
	I0429 20:11:08.734154   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.734503   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.734536   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.754451   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I0429 20:11:08.754650   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0429 20:11:08.754827   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
	I0429 20:11:08.755114   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755237   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755332   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755884   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.755905   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756031   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.756048   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.756050   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756062   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756456   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756477   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756513   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756853   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.757231   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.757254   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.757256   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.757291   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.761534   66218 addons.go:234] Setting addon default-storageclass=true in "no-preload-456788"
	W0429 20:11:08.761551   66218 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:11:08.761574   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.761857   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.761894   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.776659   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0429 20:11:08.776838   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0429 20:11:08.777067   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.777462   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.777643   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.777657   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.778152   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.778162   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.778170   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.778371   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.778845   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.778901   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0429 20:11:08.779220   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.779415   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.779446   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.779621   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.779634   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.780051   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.780246   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.780506   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.782432   66218 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:11:08.783809   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:11:08.783825   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:11:08.783843   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.782370   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.786004   66218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:11:08.787488   66218 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:08.787506   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:11:08.787663   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.788245   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.788290   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.788308   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.788381   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.788632   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.788834   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.788985   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:08.791587   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.791964   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.792052   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.792293   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.792477   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.792614   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.792712   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:08.798944   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I0429 20:11:08.799562   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.800224   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.800243   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.800790   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.801008   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.803220   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.803519   66218 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:08.803534   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:11:08.803552   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.806797   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.807216   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.807244   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.807540   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.807986   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.808170   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.808313   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:09.006753   66218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:11:09.038156   66218 node_ready.go:35] waiting up to 6m0s for node "no-preload-456788" to be "Ready" ...
	I0429 20:11:09.051516   66218 node_ready.go:49] node "no-preload-456788" has status "Ready":"True"
	I0429 20:11:09.051545   66218 node_ready.go:38] duration metric: took 13.34705ms for node "no-preload-456788" to be "Ready" ...
	I0429 20:11:09.051557   66218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:09.064032   66218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:09.308339   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:09.308749   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:11:09.308773   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:11:09.309961   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:09.347829   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:11:09.347860   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:11:09.466683   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:11:09.466718   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:11:09.678800   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:11:09.718867   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.718899   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.719248   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.719276   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.719273   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:09.719288   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.719296   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.719553   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.719574   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.719581   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:09.726177   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.726204   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.726527   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.726544   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.726590   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:10.570942   66218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.260944092s)
	I0429 20:11:10.571001   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.571012   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.571480   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.571504   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.571520   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.571528   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.571792   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:10.571818   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.571833   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.912211   66218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.233359134s)
	I0429 20:11:10.912282   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.912298   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.912746   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.912769   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.912779   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.912787   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.913055   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.913108   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.913132   66218 addons.go:470] Verifying addon metrics-server=true in "no-preload-456788"
	I0429 20:11:10.916694   66218 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0429 20:11:10.918273   66218 addons.go:505] duration metric: took 2.188028967s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0429 20:11:11.108067   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.108091   66218 pod_ready.go:81] duration metric: took 2.044032617s for pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.108103   66218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.115163   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.115196   66218 pod_ready.go:81] duration metric: took 7.084503ms for pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.115210   66218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.129264   66218 pod_ready.go:92] pod "etcd-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.129286   66218 pod_ready.go:81] duration metric: took 14.068541ms for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.129297   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.148114   66218 pod_ready.go:92] pod "kube-apiserver-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.148142   66218 pod_ready.go:81] duration metric: took 18.837962ms for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.148155   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.157985   66218 pod_ready.go:92] pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.158006   66218 pod_ready.go:81] duration metric: took 9.844321ms for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.158016   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6m95d" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.469680   66218 pod_ready.go:92] pod "kube-proxy-6m95d" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.469701   66218 pod_ready.go:81] duration metric: took 311.678646ms for pod "kube-proxy-6m95d" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.469710   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.868513   66218 pod_ready.go:92] pod "kube-scheduler-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.868539   66218 pod_ready.go:81] duration metric: took 398.821528ms for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.868550   66218 pod_ready.go:38] duration metric: took 2.816983409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:11.868569   66218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:11:11.868632   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:11:11.885115   66218 api_server.go:72] duration metric: took 3.154873937s to wait for apiserver process to appear ...
	I0429 20:11:11.885146   66218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:11:11.885169   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:11:11.890715   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I0429 20:11:11.891649   66218 api_server.go:141] control plane version: v1.30.0
	I0429 20:11:11.891671   66218 api_server.go:131] duration metric: took 6.518818ms to wait for apiserver health ...
	I0429 20:11:11.891679   66218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:11:12.072142   66218 system_pods.go:59] 9 kube-system pods found
	I0429 20:11:12.072175   66218 system_pods.go:61] "coredns-7db6d8ff4d-hcfbq" [c0b53824-478e-4523-ada4-1cd7ba306c81] Running
	I0429 20:11:12.072183   66218 system_pods.go:61] "coredns-7db6d8ff4d-pvhwv" [f38ee7b3-53fe-4609-9b2b-000f55de5d5c] Running
	I0429 20:11:12.072188   66218 system_pods.go:61] "etcd-no-preload-456788" [b0629d4c-643a-485d-aa85-33fe009fff50] Running
	I0429 20:11:12.072194   66218 system_pods.go:61] "kube-apiserver-no-preload-456788" [e56edf5c-9883-4cd9-abab-09902048f584] Running
	I0429 20:11:12.072200   66218 system_pods.go:61] "kube-controller-manager-no-preload-456788" [bfaf44f0-da19-4cec-bec9-d9917cb8a571] Running
	I0429 20:11:12.072205   66218 system_pods.go:61] "kube-proxy-6m95d" [25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7] Running
	I0429 20:11:12.072209   66218 system_pods.go:61] "kube-scheduler-no-preload-456788" [de4f90f7-05d6-4755-a4c0-2c522f7fe88c] Running
	I0429 20:11:12.072217   66218 system_pods.go:61] "metrics-server-569cc877fc-sxgwr" [046d28fe-d51e-43ba-9550-d1d7e33d9d84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:11:12.072224   66218 system_pods.go:61] "storage-provisioner" [fd1c4813-8889-4f21-b21e-6007eaa163a6] Running
	I0429 20:11:12.072247   66218 system_pods.go:74] duration metric: took 180.561509ms to wait for pod list to return data ...
	I0429 20:11:12.072256   66218 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:11:12.267637   66218 default_sa.go:45] found service account: "default"
	I0429 20:11:12.267663   66218 default_sa.go:55] duration metric: took 195.398841ms for default service account to be created ...
	I0429 20:11:12.267677   66218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:11:12.471933   66218 system_pods.go:86] 9 kube-system pods found
	I0429 20:11:12.471967   66218 system_pods.go:89] "coredns-7db6d8ff4d-hcfbq" [c0b53824-478e-4523-ada4-1cd7ba306c81] Running
	I0429 20:11:12.471975   66218 system_pods.go:89] "coredns-7db6d8ff4d-pvhwv" [f38ee7b3-53fe-4609-9b2b-000f55de5d5c] Running
	I0429 20:11:12.471981   66218 system_pods.go:89] "etcd-no-preload-456788" [b0629d4c-643a-485d-aa85-33fe009fff50] Running
	I0429 20:11:12.471987   66218 system_pods.go:89] "kube-apiserver-no-preload-456788" [e56edf5c-9883-4cd9-abab-09902048f584] Running
	I0429 20:11:12.471994   66218 system_pods.go:89] "kube-controller-manager-no-preload-456788" [bfaf44f0-da19-4cec-bec9-d9917cb8a571] Running
	I0429 20:11:12.471999   66218 system_pods.go:89] "kube-proxy-6m95d" [25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7] Running
	I0429 20:11:12.472008   66218 system_pods.go:89] "kube-scheduler-no-preload-456788" [de4f90f7-05d6-4755-a4c0-2c522f7fe88c] Running
	I0429 20:11:12.472020   66218 system_pods.go:89] "metrics-server-569cc877fc-sxgwr" [046d28fe-d51e-43ba-9550-d1d7e33d9d84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:11:12.472027   66218 system_pods.go:89] "storage-provisioner" [fd1c4813-8889-4f21-b21e-6007eaa163a6] Running
	I0429 20:11:12.472039   66218 system_pods.go:126] duration metric: took 204.355515ms to wait for k8s-apps to be running ...
	I0429 20:11:12.472052   66218 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:11:12.472110   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:11:12.487748   66218 system_svc.go:56] duration metric: took 15.68796ms WaitForService to wait for kubelet
	I0429 20:11:12.487779   66218 kubeadm.go:576] duration metric: took 3.757538662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:11:12.487804   66218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:11:12.668597   66218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:11:12.668619   66218 node_conditions.go:123] node cpu capacity is 2
	I0429 20:11:12.668629   66218 node_conditions.go:105] duration metric: took 180.819727ms to run NodePressure ...
	I0429 20:11:12.668640   66218 start.go:240] waiting for startup goroutines ...
	I0429 20:11:12.668646   66218 start.go:245] waiting for cluster config update ...
	I0429 20:11:12.668656   66218 start.go:254] writing updated cluster config ...
	I0429 20:11:12.668905   66218 ssh_runner.go:195] Run: rm -f paused
	I0429 20:11:12.718997   66218 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:11:12.720757   66218 out.go:177] * Done! kubectl is now configured to use "no-preload-456788" cluster and "default" namespace by default
	I0429 20:11:37.819019   65980 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.068841912s)
	I0429 20:11:37.819092   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:11:37.836850   65980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:11:37.849684   65980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:11:37.861597   65980 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:11:37.861626   65980 kubeadm.go:156] found existing configuration files:
	
	I0429 20:11:37.861674   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:11:37.872799   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:11:37.872860   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:11:37.884336   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:11:37.895124   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:11:37.895181   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:11:37.906874   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:11:37.917482   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:11:37.917530   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:11:37.928137   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:11:37.938698   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:11:37.938750   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:11:37.949658   65980 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:11:38.159358   65980 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:11:46.848042   65980 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:11:46.848108   65980 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:11:46.848169   65980 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:11:46.848308   65980 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:11:46.848447   65980 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:11:46.848531   65980 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:11:46.850368   65980 out.go:204]   - Generating certificates and keys ...
	I0429 20:11:46.850444   65980 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:11:46.850496   65980 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:11:46.850580   65980 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:11:46.850649   65980 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:11:46.850742   65980 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:11:46.850850   65980 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:11:46.850949   65980 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:11:46.851018   65980 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:11:46.851117   65980 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:11:46.851201   65980 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:11:46.851263   65980 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:11:46.851327   65980 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:11:46.851395   65980 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:11:46.851466   65980 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:11:46.851513   65980 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:11:46.851605   65980 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:11:46.851690   65980 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:11:46.851791   65980 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:11:46.851878   65980 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:11:46.853420   65980 out.go:204]   - Booting up control plane ...
	I0429 20:11:46.853526   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:11:46.853617   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:11:46.853696   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:11:46.853791   65980 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:11:46.853866   65980 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:11:46.853900   65980 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:11:46.854010   65980 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:11:46.854094   65980 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:11:46.854148   65980 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.976221ms
	I0429 20:11:46.854240   65980 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:11:46.854311   65980 kubeadm.go:309] [api-check] The API server is healthy after 5.50298765s
	I0429 20:11:46.854407   65980 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:11:46.854509   65980 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:11:46.854565   65980 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:11:46.854726   65980 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-161370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:11:46.854783   65980 kubeadm.go:309] [bootstrap-token] Using token: 93xwhj.zowa67wvl54p1iru
	I0429 20:11:46.856308   65980 out.go:204]   - Configuring RBAC rules ...
	I0429 20:11:46.856452   65980 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:11:46.856561   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:11:46.856736   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:11:46.856867   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:11:46.857018   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:11:46.857140   65980 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:11:46.857294   65980 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:11:46.857358   65980 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:11:46.857419   65980 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:11:46.857428   65980 kubeadm.go:309] 
	I0429 20:11:46.857502   65980 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:11:46.857514   65980 kubeadm.go:309] 
	I0429 20:11:46.857606   65980 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:11:46.857617   65980 kubeadm.go:309] 
	I0429 20:11:46.857649   65980 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:11:46.857725   65980 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:11:46.857797   65980 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:11:46.857806   65980 kubeadm.go:309] 
	I0429 20:11:46.857880   65980 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:11:46.857889   65980 kubeadm.go:309] 
	I0429 20:11:46.857947   65980 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:11:46.857955   65980 kubeadm.go:309] 
	I0429 20:11:46.858020   65980 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:11:46.858125   65980 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:11:46.858216   65980 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:11:46.858224   65980 kubeadm.go:309] 
	I0429 20:11:46.858325   65980 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:11:46.858433   65980 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:11:46.858442   65980 kubeadm.go:309] 
	I0429 20:11:46.858553   65980 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 93xwhj.zowa67wvl54p1iru \
	I0429 20:11:46.858696   65980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 20:11:46.858722   65980 kubeadm.go:309] 	--control-plane 
	I0429 20:11:46.858728   65980 kubeadm.go:309] 
	I0429 20:11:46.858797   65980 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:11:46.858803   65980 kubeadm.go:309] 
	I0429 20:11:46.858881   65980 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 93xwhj.zowa67wvl54p1iru \
	I0429 20:11:46.859014   65980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
	I0429 20:11:46.859025   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:11:46.859034   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:11:46.861619   65980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:11:46.863111   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:11:46.875965   65980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
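For context on the step above: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. The sketch below shows what such a bridge conflist typically looks like; the exact contents, plugin options and subnet here are assumptions for illustration, not the actual bytes minikube wrote in this run.

    # Illustrative only: write a typical bridge CNI conflist (contents assumed, not minikube's exact file).
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    EOF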
	I0429 20:11:46.897147   65980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:11:46.897225   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:46.897238   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-161370 minikube.k8s.io/updated_at=2024_04_29T20_11_46_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=embed-certs-161370 minikube.k8s.io/primary=true
	I0429 20:11:46.927555   65980 ops.go:34] apiserver oom_adj: -16
	I0429 20:11:47.119594   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:47.620640   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:48.119974   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:48.620618   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:49.120107   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:49.620349   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:50.120180   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:50.620533   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:51.120332   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:51.620669   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:52.119922   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:52.620467   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:53.120486   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:53.620314   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:54.120159   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:54.620430   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:55.119995   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:55.620496   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:56.120152   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:56.620390   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:57.120090   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:57.619671   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:58.120549   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:58.620334   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.120532   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.619732   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.765502   65980 kubeadm.go:1107] duration metric: took 12.868344365s to wait for elevateKubeSystemPrivileges
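The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: minikube polls until the cluster's "default" service account exists before it proceeds. A stand-alone equivalent of that poll, using the same on-host kubectl and kubeconfig paths shown in the log, might look like this (an illustrative sketch, not minikube's own code; the retry count and sleep interval are assumptions):

    #!/bin/bash
    # Poll until the "default" ServiceAccount is visible via the node-local kubectl.
    # Binary and kubeconfig paths are taken from the log lines above.
    KUBECTL=/var/lib/minikube/binaries/v1.30.0/kubectl
    KUBECONFIG_PATH=/var/lib/minikube/kubeconfig

    for i in $(seq 1 60); do
      if sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG_PATH" >/dev/null 2>&1; then
        echo "default service account is present"
        break
      fi
      sleep 0.5
    done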
	W0429 20:11:59.765550   65980 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:11:59.765561   65980 kubeadm.go:393] duration metric: took 5m12.339650014s to StartCluster
	I0429 20:11:59.765582   65980 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:59.765671   65980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:11:59.767924   65980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:59.768253   65980 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:11:59.769950   65980 out.go:177] * Verifying Kubernetes components...
	I0429 20:11:59.768323   65980 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:11:59.768433   65980 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:11:59.771281   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:11:59.771300   65980 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-161370"
	I0429 20:11:59.771313   65980 addons.go:69] Setting default-storageclass=true in profile "embed-certs-161370"
	I0429 20:11:59.771332   65980 addons.go:69] Setting metrics-server=true in profile "embed-certs-161370"
	I0429 20:11:59.771344   65980 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-161370"
	W0429 20:11:59.771355   65980 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:11:59.771361   65980 addons.go:234] Setting addon metrics-server=true in "embed-certs-161370"
	W0429 20:11:59.771370   65980 addons.go:243] addon metrics-server should already be in state true
	I0429 20:11:59.771399   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.771401   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.771354   65980 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-161370"
	I0429 20:11:59.771757   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771768   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771772   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771783   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.771786   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.771788   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.787359   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0429 20:11:59.787384   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0429 20:11:59.787503   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46153
	I0429 20:11:59.787764   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.787987   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.788069   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.788254   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788273   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.788708   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788724   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.788773   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.788832   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788852   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.789102   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.789117   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.789267   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.789478   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.789510   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.790170   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.790220   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.792108   65980 addons.go:234] Setting addon default-storageclass=true in "embed-certs-161370"
	W0429 20:11:59.792127   65980 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:11:59.792154   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.792386   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.792424   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.808581   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35659
	I0429 20:11:59.808924   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44943
	I0429 20:11:59.808943   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.809461   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.809481   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.809561   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.809791   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.810335   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.810357   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.810976   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.810992   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.811324   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.811604   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0429 20:11:59.811758   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.812141   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.812592   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.812610   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.813130   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.813351   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.813614   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.815589   65980 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:11:59.817004   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:11:59.817014   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:11:59.817027   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.815020   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.818585   65980 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:11:59.820110   65980 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:59.820125   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:11:59.820140   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.819840   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.820305   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.820333   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.820563   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.820722   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.820874   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.820998   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.822849   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.823299   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.823323   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.823460   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.823599   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.823924   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.824039   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.827552   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0429 20:11:59.827976   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.828369   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.828389   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.828754   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.828921   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.830295   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.830566   65980 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:59.830578   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:11:59.830590   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.833174   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.833526   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.833545   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.833759   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.833910   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.834029   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.834166   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.978978   65980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:11:59.995547   65980 node_ready.go:35] waiting up to 6m0s for node "embed-certs-161370" to be "Ready" ...
	I0429 20:12:00.003802   65980 node_ready.go:49] node "embed-certs-161370" has status "Ready":"True"
	I0429 20:12:00.003823   65980 node_ready.go:38] duration metric: took 8.245639ms for node "embed-certs-161370" to be "Ready" ...
	I0429 20:12:00.003833   65980 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:12:00.010487   65980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:00.072627   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:12:00.075716   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:12:00.177043   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:12:00.177069   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:12:00.278082   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:12:00.278112   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:12:00.311731   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:12:00.311756   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:12:00.369982   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:12:00.642840   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.642865   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643084   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.643109   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643227   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.643240   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.643248   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.643256   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643374   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.645085   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645103   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.645112   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.645121   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.645196   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645228   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.645231   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.645331   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645343   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.658929   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.658955   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.659236   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.659267   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.659281   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.103183   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:01.103207   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:01.103488   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:01.103542   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.103557   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:01.103541   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:01.103584   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:01.105440   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:01.105461   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.105473   65980 addons.go:470] Verifying addon metrics-server=true in "embed-certs-161370"
	I0429 20:12:01.107435   65980 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0429 20:12:01.109051   65980 addons.go:505] duration metric: took 1.340729876s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
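With storage-provisioner, default-storageclass and metrics-server enabled on "embed-certs-161370", a quick manual check against the same kubeconfig would be along these lines (a sketch; the metrics-server label selector is assumed from the upstream manifests, and "kubectl top" only returns data once the metrics-server pod, still Pending in this run, becomes Ready):

    # Inspect the addon workloads and storage class from the host kubeconfig used in this run.
    KCFG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
    kubectl --kubeconfig "$KCFG" -n kube-system get pods -l k8s-app=metrics-server
    kubectl --kubeconfig "$KCFG" get storageclass
    # Works only after metrics-server is serving metrics:
    kubectl --kubeconfig "$KCFG" top nodes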
	I0429 20:12:02.029772   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace has status "Ready":"False"
	I0429 20:12:02.520396   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.520417   65980 pod_ready.go:81] duration metric: took 2.509903724s for pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.520426   65980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.529115   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.529141   65980 pod_ready.go:81] duration metric: took 8.707165ms for pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.529153   65980 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.539459   65980 pod_ready.go:92] pod "etcd-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.539478   65980 pod_ready.go:81] duration metric: took 10.318294ms for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.539489   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.543813   65980 pod_ready.go:92] pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.543830   65980 pod_ready.go:81] duration metric: took 4.333619ms for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.543839   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.549343   65980 pod_ready.go:92] pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.549363   65980 pod_ready.go:81] duration metric: took 5.516323ms for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.549374   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wq48j" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.915209   65980 pod_ready.go:92] pod "kube-proxy-wq48j" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.915232   65980 pod_ready.go:81] duration metric: took 365.851814ms for pod "kube-proxy-wq48j" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.915240   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:03.315564   65980 pod_ready.go:92] pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:03.315587   65980 pod_ready.go:81] duration metric: took 400.340876ms for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:03.315595   65980 pod_ready.go:38] duration metric: took 3.311752591s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:12:03.315609   65980 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:12:03.315655   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:12:03.333491   65980 api_server.go:72] duration metric: took 3.565200855s to wait for apiserver process to appear ...
	I0429 20:12:03.333521   65980 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:12:03.333538   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:12:03.338822   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0429 20:12:03.339975   65980 api_server.go:141] control plane version: v1.30.0
	I0429 20:12:03.339995   65980 api_server.go:131] duration metric: took 6.468233ms to wait for apiserver health ...
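The healthz probe logged above can be reproduced by hand against the same endpoint. Because the apiserver serves a cluster-signed certificate, curl needs either the cluster CA or -k; the CA path below is the conventional location under the certificateDir shown earlier and is an assumption for this sketch:

    # Expect the body "ok" with HTTP 200 from a healthy apiserver.
    curl -k https://192.168.50.184:8443/healthz
    # Or verify against the cluster CA instead of skipping TLS checks (path assumed):
    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.50.184:8443/healthz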
	I0429 20:12:03.340002   65980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:12:03.519016   65980 system_pods.go:59] 9 kube-system pods found
	I0429 20:12:03.519042   65980 system_pods.go:61] "coredns-7db6d8ff4d-7z6zv" [422451a2-615d-4bf8-8de8-d5fa5805219f] Running
	I0429 20:12:03.519047   65980 system_pods.go:61] "coredns-7db6d8ff4d-rr6bd" [6d14ff20-6dab-4c02-b91c-0a1e326f1593] Running
	I0429 20:12:03.519050   65980 system_pods.go:61] "etcd-embed-certs-161370" [ab19e79c-18bd-4d0d-b5cf-639453495383] Running
	I0429 20:12:03.519055   65980 system_pods.go:61] "kube-apiserver-embed-certs-161370" [6091dd0a-333d-4729-97db-eb7a30755db4] Running
	I0429 20:12:03.519059   65980 system_pods.go:61] "kube-controller-manager-embed-certs-161370" [de70d57c-9329-4d37-a838-9c9ae1e41871] Running
	I0429 20:12:03.519061   65980 system_pods.go:61] "kube-proxy-wq48j" [3b3b23ef-b5b4-4754-bc44-73e1d51a18d7] Running
	I0429 20:12:03.519065   65980 system_pods.go:61] "kube-scheduler-embed-certs-161370" [c7fd3d36-4e35-43b2-93e7-45129464937d] Running
	I0429 20:12:03.519071   65980 system_pods.go:61] "metrics-server-569cc877fc-x2wb6" [cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:12:03.519075   65980 system_pods.go:61] "storage-provisioner" [93e046a1-3867-44e1-8a4f-cf0eba6dfd6b] Running
	I0429 20:12:03.519082   65980 system_pods.go:74] duration metric: took 179.075384ms to wait for pod list to return data ...
	I0429 20:12:03.519089   65980 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:12:03.714354   65980 default_sa.go:45] found service account: "default"
	I0429 20:12:03.714384   65980 default_sa.go:55] duration metric: took 195.287433ms for default service account to be created ...
	I0429 20:12:03.714395   65980 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:12:03.918729   65980 system_pods.go:86] 9 kube-system pods found
	I0429 20:12:03.918755   65980 system_pods.go:89] "coredns-7db6d8ff4d-7z6zv" [422451a2-615d-4bf8-8de8-d5fa5805219f] Running
	I0429 20:12:03.918760   65980 system_pods.go:89] "coredns-7db6d8ff4d-rr6bd" [6d14ff20-6dab-4c02-b91c-0a1e326f1593] Running
	I0429 20:12:03.918765   65980 system_pods.go:89] "etcd-embed-certs-161370" [ab19e79c-18bd-4d0d-b5cf-639453495383] Running
	I0429 20:12:03.918769   65980 system_pods.go:89] "kube-apiserver-embed-certs-161370" [6091dd0a-333d-4729-97db-eb7a30755db4] Running
	I0429 20:12:03.918773   65980 system_pods.go:89] "kube-controller-manager-embed-certs-161370" [de70d57c-9329-4d37-a838-9c9ae1e41871] Running
	I0429 20:12:03.918777   65980 system_pods.go:89] "kube-proxy-wq48j" [3b3b23ef-b5b4-4754-bc44-73e1d51a18d7] Running
	I0429 20:12:03.918780   65980 system_pods.go:89] "kube-scheduler-embed-certs-161370" [c7fd3d36-4e35-43b2-93e7-45129464937d] Running
	I0429 20:12:03.918787   65980 system_pods.go:89] "metrics-server-569cc877fc-x2wb6" [cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:12:03.918791   65980 system_pods.go:89] "storage-provisioner" [93e046a1-3867-44e1-8a4f-cf0eba6dfd6b] Running
	I0429 20:12:03.918800   65980 system_pods.go:126] duration metric: took 204.399385ms to wait for k8s-apps to be running ...
	I0429 20:12:03.918809   65980 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:12:03.918851   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:12:03.937870   65980 system_svc.go:56] duration metric: took 19.05503ms WaitForService to wait for kubelet
	I0429 20:12:03.937892   65980 kubeadm.go:576] duration metric: took 4.169607456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:12:03.937910   65980 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:12:04.116479   65980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:12:04.116504   65980 node_conditions.go:123] node cpu capacity is 2
	I0429 20:12:04.116513   65980 node_conditions.go:105] duration metric: took 178.599246ms to run NodePressure ...
	I0429 20:12:04.116524   65980 start.go:240] waiting for startup goroutines ...
	I0429 20:12:04.116530   65980 start.go:245] waiting for cluster config update ...
	I0429 20:12:04.116540   65980 start.go:254] writing updated cluster config ...
	I0429 20:12:04.116799   65980 ssh_runner.go:195] Run: rm -f paused
	I0429 20:12:04.167803   65980 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:12:04.169861   65980 out.go:177] * Done! kubectl is now configured to use "embed-certs-161370" cluster and "default" namespace by default
	I0429 20:12:09.853929   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:12:09.854036   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:12:09.856141   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:12:09.856215   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:12:09.856314   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:12:09.856435   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:12:09.856529   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:12:09.856638   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:12:09.858658   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:12:09.858759   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:12:09.858821   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:12:09.858914   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:12:09.858967   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:12:09.859049   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:12:09.859118   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:12:09.859197   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:12:09.859311   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:12:09.859435   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:12:09.859548   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:12:09.859605   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:12:09.859678   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:12:09.859766   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:12:09.859856   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:12:09.859947   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:12:09.860025   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:12:09.860149   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:12:09.860228   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:12:09.860289   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:12:09.860390   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:12:09.862098   66615 out.go:204]   - Booting up control plane ...
	I0429 20:12:09.862211   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:12:09.862298   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:12:09.862360   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:12:09.862484   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:12:09.862720   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:12:09.862794   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:12:09.862882   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863117   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863244   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863470   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863544   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863814   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863895   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864144   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864223   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864393   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864408   66615 kubeadm.go:309] 
	I0429 20:12:09.864473   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:12:09.864526   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:12:09.864543   66615 kubeadm.go:309] 
	I0429 20:12:09.864589   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:12:09.864638   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:12:09.864779   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:12:09.864789   66615 kubeadm.go:309] 
	I0429 20:12:09.864911   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:12:09.864971   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:12:09.865026   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:12:09.865033   66615 kubeadm.go:309] 
	I0429 20:12:09.865150   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:12:09.865228   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:12:09.865241   66615 kubeadm.go:309] 
	I0429 20:12:09.865404   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:12:09.865538   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:12:09.865651   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:12:09.865755   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:12:09.865828   66615 kubeadm.go:309] 
	W0429 20:12:09.865940   66615 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0429 20:12:09.866027   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:12:10.987703   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.121642991s)
	I0429 20:12:10.987802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:12:11.007295   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:12:11.020772   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:12:11.020790   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:12:11.020838   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:12:11.033334   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:12:11.033405   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:12:11.044565   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:12:11.057087   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:12:11.057143   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:12:11.069908   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.082866   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:12:11.082920   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.096659   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:12:11.110106   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:12:11.110166   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:12:11.124952   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:12:11.396252   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:14:07.831448   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:14:07.831556   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:14:07.833111   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:14:07.833179   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:14:07.833288   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:14:07.833421   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:14:07.833530   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:14:07.833616   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:14:07.835518   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:14:07.835623   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:14:07.835703   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:14:07.835776   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:14:07.835839   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:14:07.835893   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:14:07.835957   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:14:07.836039   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:14:07.836129   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:14:07.836238   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:14:07.836350   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:14:07.836394   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:14:07.836441   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:14:07.836488   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:14:07.836559   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:14:07.836637   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:14:07.836683   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:14:07.836778   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:14:07.836854   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:14:07.836895   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:14:07.836950   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:14:07.838553   66615 out.go:204]   - Booting up control plane ...
	I0429 20:14:07.838635   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:14:07.838718   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:14:07.838836   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:14:07.838918   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:14:07.839069   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:14:07.839126   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:14:07.839180   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839369   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839450   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839654   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839779   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840008   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840076   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840322   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840380   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840571   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840594   66615 kubeadm.go:309] 
	I0429 20:14:07.840637   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:14:07.840673   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:14:07.840682   66615 kubeadm.go:309] 
	I0429 20:14:07.840715   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:14:07.840745   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:14:07.840844   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:14:07.840857   66615 kubeadm.go:309] 
	I0429 20:14:07.840969   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:14:07.841022   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:14:07.841073   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:14:07.841083   66615 kubeadm.go:309] 
	I0429 20:14:07.841184   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:14:07.841315   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:14:07.841325   66615 kubeadm.go:309] 
	I0429 20:14:07.841454   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:14:07.841550   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:14:07.841632   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:14:07.841697   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:14:07.841760   66615 kubeadm.go:393] duration metric: took 8m1.501853767s to StartCluster
	I0429 20:14:07.841781   66615 kubeadm.go:309] 
	I0429 20:14:07.841800   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:14:07.841853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:14:07.898194   66615 cri.go:89] found id: ""
	I0429 20:14:07.898227   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.898237   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:14:07.898244   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:14:07.898316   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:14:07.938873   66615 cri.go:89] found id: ""
	I0429 20:14:07.938903   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.938914   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:14:07.938921   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:14:07.938979   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:14:07.980523   66615 cri.go:89] found id: ""
	I0429 20:14:07.980551   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.980559   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:14:07.980565   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:14:07.980612   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:14:08.021334   66615 cri.go:89] found id: ""
	I0429 20:14:08.021366   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.021377   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:14:08.021389   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:14:08.021446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:14:08.060598   66615 cri.go:89] found id: ""
	I0429 20:14:08.060636   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.060648   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:14:08.060655   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:14:08.060716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:14:08.101689   66615 cri.go:89] found id: ""
	I0429 20:14:08.101715   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.101723   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:14:08.101729   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:14:08.101786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:14:08.143295   66615 cri.go:89] found id: ""
	I0429 20:14:08.143333   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.143344   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:14:08.143351   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:14:08.143408   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:14:08.190555   66615 cri.go:89] found id: ""
	I0429 20:14:08.190585   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.190597   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:14:08.190609   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:14:08.190624   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:14:08.251830   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:14:08.251870   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:14:08.306512   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:14:08.306554   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:14:08.323258   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:14:08.323283   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:14:08.405539   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:14:08.405568   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:14:08.405583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0429 20:14:08.514288   66615 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0429 20:14:08.514344   66615 out.go:239] * 
	W0429 20:14:08.514431   66615 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.514465   66615 out.go:239] * 
	W0429 20:14:08.515399   66615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:14:08.518578   66615 out.go:177] 
	W0429 20:14:08.519725   66615 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.519782   66615 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0429 20:14:08.519816   66615 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0429 20:14:08.521068   66615 out.go:177] 
	
	
	==> CRI-O <==
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.845515054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422193845480584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40e21010-1263-4b5a-9545-b873b0697ee6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.846673770Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73db2738-1c03-4fad-87bf-c03e08eec232 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.846781070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73db2738-1c03-4fad-87bf-c03e08eec232 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.846844648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=73db2738-1c03-4fad-87bf-c03e08eec232 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.891171002Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69a240bc-7cc4-4ad2-88b7-b3cf9ac2cb8e name=/runtime.v1.RuntimeService/Version
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.891312729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69a240bc-7cc4-4ad2-88b7-b3cf9ac2cb8e name=/runtime.v1.RuntimeService/Version
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.892848549Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1b1bc8f-2df9-455e-b348-ed9f1b62321a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.893681633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422193893646398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1b1bc8f-2df9-455e-b348-ed9f1b62321a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.894863381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69fa488f-1220-4677-8086-91e8bad192dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.895060930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69fa488f-1220-4677-8086-91e8bad192dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.895117474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=69fa488f-1220-4677-8086-91e8bad192dd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.934219392Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38c72191-f814-4e20-93b0-764be25f55d6 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.934321215Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38c72191-f814-4e20-93b0-764be25f55d6 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.936537821Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31667679-9426-446d-8cad-51a8683aad83 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.937234739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422193937204667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31667679-9426-446d-8cad-51a8683aad83 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.938120457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7be1774-f568-4aa4-8eec-a09123f84fcd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.938203116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7be1774-f568-4aa4-8eec-a09123f84fcd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.938237533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c7be1774-f568-4aa4-8eec-a09123f84fcd name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.981152211Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60a944ad-0bca-4a93-b010-b428c44fd615 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.981296047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60a944ad-0bca-4a93-b010-b428c44fd615 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.983289054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54dd5fab-bd05-4da9-9f6a-5a38346fda2f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.984057018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422193984021381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54dd5fab-bd05-4da9-9f6a-5a38346fda2f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.985213166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1aefbb99-9724-43b0-86a8-f9a71c28ad77 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.985328555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1aefbb99-9724-43b0-86a8-f9a71c28ad77 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:23:13 old-k8s-version-919612 crio[646]: time="2024-04-29 20:23:13.985381942Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1aefbb99-9724-43b0-86a8-f9a71c28ad77 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr29 20:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052789] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046548] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.710890] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.577556] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.715602] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.063950] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.064197] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076631] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.231967] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.183078] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.301851] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[Apr29 20:06] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.070853] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.488329] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[ +10.271232] kauditd_printk_skb: 46 callbacks suppressed
	[Apr29 20:10] systemd-fstab-generator[4978]: Ignoring "noauto" option for root device
	[Apr29 20:12] systemd-fstab-generator[5259]: Ignoring "noauto" option for root device
	[  +0.075523] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:23:14 up 17 min,  0 users,  load average: 0.23, 0.10, 0.07
	Linux old-k8s-version-919612 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]: goroutine 144 [runnable]:
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0000e5a40)
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]: goroutine 145 [select]:
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0001194f0, 0xc000b5c601, 0xc00096e700, 0xc000397a00, 0xc0005f0c80, 0xc0005f0c40)
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000b5c600, 0x0, 0x0)
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0000e5a40)
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Apr 29 20:23:10 old-k8s-version-919612 kubelet[6422]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Apr 29 20:23:10 old-k8s-version-919612 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 29 20:23:10 old-k8s-version-919612 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 29 20:23:11 old-k8s-version-919612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 113.
	Apr 29 20:23:11 old-k8s-version-919612 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 29 20:23:11 old-k8s-version-919612 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 29 20:23:11 old-k8s-version-919612 kubelet[6431]: I0429 20:23:11.395794    6431 server.go:416] Version: v1.20.0
	Apr 29 20:23:11 old-k8s-version-919612 kubelet[6431]: I0429 20:23:11.396136    6431 server.go:837] Client rotation is on, will bootstrap in background
	Apr 29 20:23:11 old-k8s-version-919612 kubelet[6431]: I0429 20:23:11.398052    6431 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 29 20:23:11 old-k8s-version-919612 kubelet[6431]: W0429 20:23:11.399112    6431 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 29 20:23:11 old-k8s-version-919612 kubelet[6431]: I0429 20:23:11.399198    6431 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-919612 -n old-k8s-version-919612
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-919612 -n old-k8s-version-919612: exit status 2 (235.763619ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-919612" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.52s)
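Editor's note (not part of the captured test output): the failure above follows one pattern — the kubelet on old-k8s-version-919612 keeps crash-looping (the journal shows restart counter 113), so the healthz probe on 127.0.0.1:10248 is never answered and kubeadm's wait-control-plane phase times out. A minimal sketch of how the checks suggested in the log could be run by hand on that node is below; the profile name is taken from the log, but the exact 'minikube start' flags this test used are not shown here, so the final retry line only illustrates the --extra-config suggestion from the minikube output and is not the test's actual invocation.

	# Inspect the kubelet service and its recent journal (both suggested in the kubeadm output above)
	systemctl status kubelet
	journalctl -xeu kubelet
	# List any Kubernetes containers CRI-O managed to start (this run found none)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Retry with the cgroup-driver override suggested by minikube (flags assumed for illustration)
	minikube start -p old-k8s-version-919612 --extra-config=kubelet.cgroup-driver=systemd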

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (538.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-866143 -n default-k8s-diff-port-866143
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-29 20:28:57.270178256 +0000 UTC m=+6586.917553380
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-866143 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-866143 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.626µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-866143 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866143 -n default-k8s-diff-port-866143
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-866143 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-866143 logs -n 25: (2.014186986s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-870155 sudo                               | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo                               | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo                               | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo cat                           | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo cat                           | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo                               | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo                               | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo cat                           | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo docker                        | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo                               | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo                               | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo cat                           | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo cat                           | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo                               | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo                               | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo                               | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo cat                           | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo cat                           | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo                               | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo                               | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo                               | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo find                          | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-870155 sudo crio                          | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-870155                                    | kindnet-870155            | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC | 29 Apr 24 20:28 UTC |
	| start   | -p enable-default-cni-870155                         | enable-default-cni-870155 | jenkins | v1.33.0 | 29 Apr 24 20:28 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
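The table above records a sweep of runtime diagnostics gathered over minikube ssh before the kindnet profile was deleted. Below is a minimal Go sketch of the same kind of sweep, not part of the test suite; it assumes a minikube binary on PATH, and the profile name and command list are illustrative only, not taken from this run.

	// A minimal sketch of the diagnostic sweep recorded in the table above.
	// Assumption: a minikube binary is on PATH and the profile still exists.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "kindnet-870155" // illustrative profile name
		cmds := []string{
			"sudo systemctl status kubelet --all --full --no-pager",
			"sudo systemctl cat crio --no-pager",
			"sudo crio config",
		}
		for _, c := range cmds {
			// Run each command inside the guest VM over "minikube ssh".
			out, err := exec.Command("minikube", "-p", profile, "ssh", c).CombinedOutput()
			fmt.Printf("== %s ==\n%s(err=%v)\n", c, out, err)
		}
	}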
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:28:55
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:28:55.382813   78898 out.go:291] Setting OutFile to fd 1 ...
	I0429 20:28:55.383415   78898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:28:55.383434   78898 out.go:304] Setting ErrFile to fd 2...
	I0429 20:28:55.383442   78898 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:28:55.383942   78898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 20:28:55.384939   78898 out.go:298] Setting JSON to false
	I0429 20:28:55.386796   78898 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7833,"bootTime":1714414702,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 20:28:55.386880   78898 start.go:139] virtualization: kvm guest
	I0429 20:28:55.389116   78898 out.go:177] * [enable-default-cni-870155] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 20:28:55.390881   78898 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:28:55.390934   78898 notify.go:220] Checking for updates...
	I0429 20:28:55.392186   78898 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:28:55.393739   78898 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:28:55.396429   78898 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:28:55.398985   78898 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 20:28:55.400703   78898 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:28:55.402844   78898 config.go:182] Loaded profile config "calico-870155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:28:55.402992   78898 config.go:182] Loaded profile config "custom-flannel-870155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:28:55.403111   78898 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:28:55.403239   78898 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:28:55.451365   78898 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 20:28:55.452687   78898 start.go:297] selected driver: kvm2
	I0429 20:28:55.452707   78898 start.go:901] validating driver "kvm2" against <nil>
	I0429 20:28:55.452721   78898 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:28:55.453722   78898 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:28:55.453818   78898 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 20:28:55.473718   78898 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 20:28:55.473764   78898 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0429 20:28:55.473949   78898 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0429 20:28:55.473972   78898 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:28:55.474033   78898 cni.go:84] Creating CNI manager for "bridge"
	I0429 20:28:55.474047   78898 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 20:28:55.474130   78898 start.go:340] cluster config:
	{Name:enable-default-cni-870155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-870155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
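The E-level line above shows the deprecated --enable-default-cni flag being translated into --cni=bridge before the cluster config is generated (note CNI:bridge and NetworkPlugin:cni in the dump). The sketch below is a hedged illustration of that translation, not minikube's actual start_flags.go logic.

	// A hedged illustration of the flag translation reported above; the real
	// logic lives in minikube's start_flags.go and differs in detail.
	package main

	import "fmt"

	func resolveCNI(enableDefaultCNI bool, cni string) string {
		if enableDefaultCNI && cni == "" {
			return "bridge" // deprecated flag maps onto the bridge CNI
		}
		return cni
	}

	func main() {
		// Prints "bridge", matching the generated cluster config above.
		fmt.Println(resolveCNI(true, ""))
	}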
	I0429 20:28:55.474249   78898 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:28:55.475983   78898 out.go:177] * Starting "enable-default-cni-870155" primary control-plane node in "enable-default-cni-870155" cluster
	I0429 20:28:55.477226   78898 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:28:55.477264   78898 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 20:28:55.477277   78898 cache.go:56] Caching tarball of preloaded images
	I0429 20:28:55.477375   78898 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 20:28:55.477388   78898 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 20:28:55.477489   78898 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/enable-default-cni-870155/config.json ...
	I0429 20:28:55.477513   78898 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/enable-default-cni-870155/config.json: {Name:mk9ad284eaecb519bd3a49ce69fa7259241542fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:28:55.477669   78898 start.go:360] acquireMachinesLock for enable-default-cni-870155: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:28:55.477718   78898 start.go:364] duration metric: took 22.088µs to acquireMachinesLock for "enable-default-cni-870155"
	I0429 20:28:55.477749   78898 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-870155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.0 ClusterName:enable-default-cni-870155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:28:55.477843   78898 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 20:28:52.554735   77059 out.go:204]   - Booting up control plane ...
	I0429 20:28:52.554826   77059 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:28:52.555465   77059 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:28:52.556716   77059 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:28:52.593722   77059 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:28:52.593841   77059 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:28:52.593899   77059 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:28:52.757132   77059 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:28:52.757233   77059 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:28:53.758718   77059 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001919386s
	I0429 20:28:53.758844   77059 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:28:54.267314   76479 node_ready.go:53] node "calico-870155" has status "Ready":"False"
	I0429 20:28:56.761512   76479 node_ready.go:53] node "calico-870155" has status "Ready":"False"
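The two node_ready.go lines above come from a parallel start (calico-870155) polling until the node reports Ready. Below is a hedged client-go sketch of such a poll; minikube's own wait loop differs, and the kubeconfig path and node name are assumptions for illustration.

	// A hedged client-go sketch of a node-Ready poll like the one logged above.
	// Assumptions: the current kubeconfig context points at the cluster and the
	// node is named "calico-870155".
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "calico-870155", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, c.Status)
					if c.Status == corev1.ConditionTrue {
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
	}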
	
	
	==> CRI-O <==
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.202774591Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422538202741283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e324750-e7e2-4ab9-a5c2-7612024ab1dd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.203749658Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f973b471-3979-481d-b11b-11c9cae9851b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.203931144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f973b471-3979-481d-b11b-11c9cae9851b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.204258551Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421222990630090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f1a04bc18d4b5afb60abd8f5cc2c1502fe9b02888477d81d21621cceed451c,PodSandboxId:6a08429e8c4823cbc29bf41bf26f56ab428639313edcad5037de9566d3a6983f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714421202883574425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60422741-64fe-4169-bdbd-384825776aef,},Annotations:map[string]string{io.kubernetes.container.hash: 8545d2cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52,PodSandboxId:e4a8e598d93b3af609a80df6b75698559b2b6e086a04706aec5ad4fbbf311ba8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421199795247389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7m65s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72397559-b0da-492a-be1c-297027021f50,},Annotations:map[string]string{io.kubernetes.container.hash: 51500de8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714421192155164718,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561,PodSandboxId:b8bf49dccc6d886bc7628b38f50835c95ec5329e881e094eac6e5b0fce75b52f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714421192089263486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zddtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47956c-26c1-48e2-8f42-a2a
81d201503,},Annotations:map[string]string{io.kubernetes.container.hash: b9b15c9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0,PodSandboxId:01b4b04f083a312f923e21ae7f5b4c1318fab64fd7f62482c873f8078d56022b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421187521686479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077414c522aee9483d3819d99
7b879c8,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f,PodSandboxId:34abaa6dac5ebedec40d5b604770433edb44465efaee911ec475837813e22cc7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421187485479944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7177a093ebd5743fc5b68cae5a3d2c0,},Annotations:map[string
]string{io.kubernetes.container.hash: cf1ccb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552,PodSandboxId:62b9000d26f2d365735496701ac01757eb9ee92273cb805b8499089443a85493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421187431442344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960e82e54b5cb1fc11c964ee67d686c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a67f4c5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9,PodSandboxId:834f9cbce565cf0a59364cd782b0e4edbe4834a232df6df0aaafdc4bd7130864,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421187359252463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74981adc5b9d59cd235f804f7b09fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f973b471-3979-481d-b11b-11c9cae9851b name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.271208445Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d955f2d-0de8-42a5-8408-fb61de8151a9 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.271288300Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d955f2d-0de8-42a5-8408-fb61de8151a9 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.273354543Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0fb623e-a5ff-4341-82c4-673fa3c4fc35 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.273770998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422538273743636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0fb623e-a5ff-4341-82c4-673fa3c4fc35 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.274619270Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=822f536b-8f3d-44ae-a11e-d67db9466779 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.274730653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=822f536b-8f3d-44ae-a11e-d67db9466779 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.275208364Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421222990630090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f1a04bc18d4b5afb60abd8f5cc2c1502fe9b02888477d81d21621cceed451c,PodSandboxId:6a08429e8c4823cbc29bf41bf26f56ab428639313edcad5037de9566d3a6983f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714421202883574425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60422741-64fe-4169-bdbd-384825776aef,},Annotations:map[string]string{io.kubernetes.container.hash: 8545d2cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52,PodSandboxId:e4a8e598d93b3af609a80df6b75698559b2b6e086a04706aec5ad4fbbf311ba8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421199795247389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7m65s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72397559-b0da-492a-be1c-297027021f50,},Annotations:map[string]string{io.kubernetes.container.hash: 51500de8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714421192155164718,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561,PodSandboxId:b8bf49dccc6d886bc7628b38f50835c95ec5329e881e094eac6e5b0fce75b52f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714421192089263486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zddtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47956c-26c1-48e2-8f42-a2a
81d201503,},Annotations:map[string]string{io.kubernetes.container.hash: b9b15c9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0,PodSandboxId:01b4b04f083a312f923e21ae7f5b4c1318fab64fd7f62482c873f8078d56022b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421187521686479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077414c522aee9483d3819d99
7b879c8,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f,PodSandboxId:34abaa6dac5ebedec40d5b604770433edb44465efaee911ec475837813e22cc7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421187485479944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7177a093ebd5743fc5b68cae5a3d2c0,},Annotations:map[string
]string{io.kubernetes.container.hash: cf1ccb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552,PodSandboxId:62b9000d26f2d365735496701ac01757eb9ee92273cb805b8499089443a85493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421187431442344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960e82e54b5cb1fc11c964ee67d686c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a67f4c5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9,PodSandboxId:834f9cbce565cf0a59364cd782b0e4edbe4834a232df6df0aaafdc4bd7130864,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421187359252463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74981adc5b9d59cd235f804f7b09fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=822f536b-8f3d-44ae-a11e-d67db9466779 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.306545127Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=d7c637b5-bb88-4b6f-971f-eb31282e7a9a name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.307073441Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6a08429e8c4823cbc29bf41bf26f56ab428639313edcad5037de9566d3a6983f,Metadata:&PodSandboxMetadata{Name:busybox,Uid:60422741-64fe-4169-bdbd-384825776aef,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421199452093994,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60422741-64fe-4169-bdbd-384825776aef,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T20:06:31.663163490Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4a8e598d93b3af609a80df6b75698559b2b6e086a04706aec5ad4fbbf311ba8,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7m65s,Uid:72397559-b0da-492a-be1c-297027021f50,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:171442
1199448326879,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-7m65s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72397559-b0da-492a-be1c-297027021f50,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T20:06:31.663150673Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:43cd61d009d2ca71148e08ceb13fed0503212dd34b9145a5ef3ef0325963979d,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-g6gw2,Uid:7a4b0494-73fb-4444-a8c1-544885a2d873,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421197746573447,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-g6gw2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a4b0494-73fb-4444-a8c1-544885a2d873,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29
T20:06:31.663158110Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b8bf49dccc6d886bc7628b38f50835c95ec5329e881e094eac6e5b0fce75b52f,Metadata:&PodSandboxMetadata{Name:kube-proxy-zddtx,Uid:3d47956c-26c1-48e2-8f42-a2a81d201503,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421191978809085,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zddtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47956c-26c1-48e2-8f42-a2a81d201503,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-29T20:06:31.663160980Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:160d0154-7417-454b-a253-28c67b85f951,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421191978449359,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2024-04-29T20:06:31.663159711Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:62b9000d26f2d365735496701ac01757eb9ee92273cb805b8499089443a85493,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-866143,Uid:960e82e54b5cb1fc11c964ee67d686c9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421187187437251,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960e82e54b5cb1fc11c964ee67d686c9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.106:8444,kubernetes.io/config.hash: 960e82e54b5cb1fc11c964ee67d686c9,kubernetes.io/config.seen: 2024-04-29T20:06:26.652971105Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:34abaa6dac5ebedec40d5b604770433edb44465efaee911ec475837813e22c
c7,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-866143,Uid:b7177a093ebd5743fc5b68cae5a3d2c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421187176376079,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7177a093ebd5743fc5b68cae5a3d2c0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.106:2379,kubernetes.io/config.hash: b7177a093ebd5743fc5b68cae5a3d2c0,kubernetes.io/config.seen: 2024-04-29T20:06:26.703310730Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:834f9cbce565cf0a59364cd782b0e4edbe4834a232df6df0aaafdc4bd7130864,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-866143,Uid:c74981adc5b9d59cd235f804f7b09fc3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421187171145971,Labels:map[s
tring]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74981adc5b9d59cd235f804f7b09fc3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c74981adc5b9d59cd235f804f7b09fc3,kubernetes.io/config.seen: 2024-04-29T20:06:26.652976216Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:01b4b04f083a312f923e21ae7f5b4c1318fab64fd7f62482c873f8078d56022b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-866143,Uid:077414c522aee9483d3819d997b879c8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1714421187166855173,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077414c522aee9483d3819d997b879c8,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: 077414c522aee9483d3819d997b879c8,kubernetes.io/config.seen: 2024-04-29T20:06:26.652977311Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d7c637b5-bb88-4b6f-971f-eb31282e7a9a name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.307989288Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4ffea13-5809-48d2-8d93-b9cb95d885a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.308074040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4ffea13-5809-48d2-8d93-b9cb95d885a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.308394151Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421222990630090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f1a04bc18d4b5afb60abd8f5cc2c1502fe9b02888477d81d21621cceed451c,PodSandboxId:6a08429e8c4823cbc29bf41bf26f56ab428639313edcad5037de9566d3a6983f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714421202883574425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60422741-64fe-4169-bdbd-384825776aef,},Annotations:map[string]string{io.kubernetes.container.hash: 8545d2cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52,PodSandboxId:e4a8e598d93b3af609a80df6b75698559b2b6e086a04706aec5ad4fbbf311ba8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421199795247389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7m65s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72397559-b0da-492a-be1c-297027021f50,},Annotations:map[string]string{io.kubernetes.container.hash: 51500de8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714421192155164718,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561,PodSandboxId:b8bf49dccc6d886bc7628b38f50835c95ec5329e881e094eac6e5b0fce75b52f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714421192089263486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zddtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47956c-26c1-48e2-8f42-a2a
81d201503,},Annotations:map[string]string{io.kubernetes.container.hash: b9b15c9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0,PodSandboxId:01b4b04f083a312f923e21ae7f5b4c1318fab64fd7f62482c873f8078d56022b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421187521686479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077414c522aee9483d3819d99
7b879c8,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f,PodSandboxId:34abaa6dac5ebedec40d5b604770433edb44465efaee911ec475837813e22cc7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421187485479944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7177a093ebd5743fc5b68cae5a3d2c0,},Annotations:map[string
]string{io.kubernetes.container.hash: cf1ccb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552,PodSandboxId:62b9000d26f2d365735496701ac01757eb9ee92273cb805b8499089443a85493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421187431442344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960e82e54b5cb1fc11c964ee67d686c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a67f4c5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9,PodSandboxId:834f9cbce565cf0a59364cd782b0e4edbe4834a232df6df0aaafdc4bd7130864,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421187359252463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74981adc5b9d59cd235f804f7b09fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4ffea13-5809-48d2-8d93-b9cb95d885a9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.321335256Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c864d31-93ad-4b43-934f-f9a971118171 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.321440212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c864d31-93ad-4b43-934f-f9a971118171 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.323044927Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=245af225-21e8-4b71-9941-a3b762831c4c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.323835087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422538323798956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=245af225-21e8-4b71-9941-a3b762831c4c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.324705667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0b30c7d-c1cf-4866-a8d8-bf55f70dfa19 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.324780674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0b30c7d-c1cf-4866-a8d8-bf55f70dfa19 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.325168146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421222990630090,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9f1a04bc18d4b5afb60abd8f5cc2c1502fe9b02888477d81d21621cceed451c,PodSandboxId:6a08429e8c4823cbc29bf41bf26f56ab428639313edcad5037de9566d3a6983f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1714421202883574425,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60422741-64fe-4169-bdbd-384825776aef,},Annotations:map[string]string{io.kubernetes.container.hash: 8545d2cf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52,PodSandboxId:e4a8e598d93b3af609a80df6b75698559b2b6e086a04706aec5ad4fbbf311ba8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421199795247389,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7m65s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72397559-b0da-492a-be1c-297027021f50,},Annotations:map[string]string{io.kubernetes.container.hash: 51500de8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9,PodSandboxId:c91cb288bef7c0915cbec0bc7e90279e72ac06f00ec199913b3827cace15c009,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1714421192155164718,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: 160d0154-7417-454b-a253-28c67b85f951,},Annotations:map[string]string{io.kubernetes.container.hash: 98bef5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561,PodSandboxId:b8bf49dccc6d886bc7628b38f50835c95ec5329e881e094eac6e5b0fce75b52f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1714421192089263486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zddtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d47956c-26c1-48e2-8f42-a2a
81d201503,},Annotations:map[string]string{io.kubernetes.container.hash: b9b15c9d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0,PodSandboxId:01b4b04f083a312f923e21ae7f5b4c1318fab64fd7f62482c873f8078d56022b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421187521686479,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077414c522aee9483d3819d99
7b879c8,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f,PodSandboxId:34abaa6dac5ebedec40d5b604770433edb44465efaee911ec475837813e22cc7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421187485479944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7177a093ebd5743fc5b68cae5a3d2c0,},Annotations:map[string
]string{io.kubernetes.container.hash: cf1ccb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552,PodSandboxId:62b9000d26f2d365735496701ac01757eb9ee92273cb805b8499089443a85493,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421187431442344,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960e82e54b5cb1fc11c964ee67d686c9,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a67f4c5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9,PodSandboxId:834f9cbce565cf0a59364cd782b0e4edbe4834a232df6df0aaafdc4bd7130864,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421187359252463,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-866143,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c74981adc5b9d59cd235f804f7b09fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0b30c7d-c1cf-4866-a8d8-bf55f70dfa19 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.352702391Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5c6a71e6-401d-42d1-8457-a0fa7a45426a name=/runtime.v1.ImageService/ListImages
	Apr 29 20:28:58 default-k8s-diff-port-866143 crio[727]: time="2024-04-29 20:28:58.353808872Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,RepoTags:[registry.k8s.io/kube-apiserver:v1.30.0],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81 registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3],Size_:117609952,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450],Size_:112170310,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinn
ed:false,},&Image{Id:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,RepoTags:[registry.k8s.io/kube-scheduler:v1.30.0],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67 registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a],Size_:63026502,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,RepoTags:[registry.k8s.io/kube-proxy:v1.30.0],RepoDigests:[registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68 registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210],Size_:85932953,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b4
80cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,RepoTags:[registry.k8s.io/etcd:3.5.12-0],RepoDigests:[registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62 registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b],Size_:150779692,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f56173
42c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,RepoTags:[docker.io/kindest/kindnetd:v20240202-8f1494ea],RepoDigests:[docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988 docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac],Size_:65291810,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,RepoTags:[gcr.io/k8s-minikube/busybox:1.28.4-glibc],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c
5a6f00e gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998],Size_:4631262,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=5c6a71e6-401d-42d1-8457-a0fa7a45426a name=/runtime.v1.ImageService/ListImages
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	55a4d86ba249f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   c91cb288bef7c       storage-provisioner
	b9f1a04bc18d4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   6a08429e8c482       busybox
	ff819232db9ec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      22 minutes ago      Running             coredns                   1                   e4a8e598d93b3       coredns-7db6d8ff4d-7m65s
	d235258efef8b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   c91cb288bef7c       storage-provisioner
	5291e43ebc5a3       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      22 minutes ago      Running             kube-proxy                1                   b8bf49dccc6d8       kube-proxy-zddtx
	38c3d9d672593       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      22 minutes ago      Running             kube-scheduler            1                   01b4b04f083a3       kube-scheduler-default-k8s-diff-port-866143
	7813548bb1ebb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      22 minutes ago      Running             etcd                      1                   34abaa6dac5eb       etcd-default-k8s-diff-port-866143
	40e61b985a70c       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      22 minutes ago      Running             kube-apiserver            1                   62b9000d26f2d       kube-apiserver-default-k8s-diff-port-866143
	453c723fef9ad       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      22 minutes ago      Running             kube-controller-manager   1                   834f9cbce565c       kube-controller-manager-default-k8s-diff-port-866143
	
	
	==> coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49827 - 59453 "HINFO IN 708020101607324385.5107843508713828177. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014125611s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-866143
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-866143
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=default-k8s-diff-port-866143
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T19_59_40_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:59:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-866143
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:28:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:27:26 +0000   Mon, 29 Apr 2024 19:59:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:27:26 +0000   Mon, 29 Apr 2024 19:59:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:27:26 +0000   Mon, 29 Apr 2024 19:59:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:27:26 +0000   Mon, 29 Apr 2024 20:06:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.106
	  Hostname:    default-k8s-diff-port-866143
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a12ab39f5cd241eeaeb7bd76cd5f62dd
	  System UUID:                a12ab39f-5cd2-41ee-aeb7-bd76cd5f62dd
	  Boot ID:                    e2aa995e-fe3a-4c45-a4f2-3707115a5739
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-7m65s                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-866143                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-866143             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-866143    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-zddtx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-866143             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-g6gw2                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-866143 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-866143 event: Registered Node default-k8s-diff-port-866143 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-866143 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-866143 event: Registered Node default-k8s-diff-port-866143 in Controller
	
	
	==> dmesg <==
	[Apr29 20:06] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063609] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049309] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.117522] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.566583] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.599501] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.366829] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.061114] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067741] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.195585] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.141738] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.349230] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +5.293185] systemd-fstab-generator[809]: Ignoring "noauto" option for root device
	[  +0.066389] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.742309] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +5.633764] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.475734] systemd-fstab-generator[1546]: Ignoring "noauto" option for root device
	[  +3.264132] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.189664] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] <==
	{"level":"info","ts":"2024-04-29T20:27:39.714277Z","caller":"traceutil/trace.go:171","msg":"trace[1208906735] linearizableReadLoop","detail":"{readStateIndex:1922; appliedIndex:1919; }","duration":"1.617079346s","start":"2024-04-29T20:27:38.097174Z","end":"2024-04-29T20:27:39.714253Z","steps":["trace[1208906735] 'read index received'  (duration: 1.113946958s)","trace[1208906735] 'applied index is now lower than readState.Index'  (duration: 503.131599ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T20:27:39.714559Z","caller":"traceutil/trace.go:171","msg":"trace[1745942450] transaction","detail":"{read_only:false; response_revision:1628; number_of_response:1; }","duration":"1.293889055s","start":"2024-04-29T20:27:38.420652Z","end":"2024-04-29T20:27:39.714541Z","steps":["trace[1745942450] 'process raft request'  (duration: 1.293084509s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:27:39.714688Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:27:38.420634Z","time spent":"1.293997141s","remote":"127.0.0.1:57328","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-vtiqnbkdbazwttajmqs37mtdee\" mod_revision:1620 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-vtiqnbkdbazwttajmqs37mtdee\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-vtiqnbkdbazwttajmqs37mtdee\" > >"}
	{"level":"warn","ts":"2024-04-29T20:27:39.715057Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.617871463s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:27:39.715153Z","caller":"traceutil/trace.go:171","msg":"trace[990527154] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1628; }","duration":"1.617999457s","start":"2024-04-29T20:27:38.097142Z","end":"2024-04-29T20:27:39.715142Z","steps":["trace[990527154] 'agreement among raft nodes before linearized reading'  (duration: 1.617841274s)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:27:39.715774Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:27:38.097125Z","time spent":"1.618631505s","remote":"127.0.0.1:57070","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-04-29T20:27:39.71613Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"873.39919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:27:39.717365Z","caller":"traceutil/trace.go:171","msg":"trace[1950765648] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1628; }","duration":"874.632318ms","start":"2024-04-29T20:27:38.842717Z","end":"2024-04-29T20:27:39.717349Z","steps":["trace[1950765648] 'agreement among raft nodes before linearized reading'  (duration: 873.379115ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:27:39.717447Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:27:38.84267Z","time spent":"874.763489ms","remote":"127.0.0.1:57282","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":27,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2024-04-29T20:27:39.716364Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"848.281201ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-04-29T20:27:39.717556Z","caller":"traceutil/trace.go:171","msg":"trace[555169078] range","detail":"{range_begin:/registry/deployments/; range_end:/registry/deployments0; response_count:0; response_revision:1628; }","duration":"849.583756ms","start":"2024-04-29T20:27:38.867957Z","end":"2024-04-29T20:27:39.717541Z","steps":["trace[555169078] 'agreement among raft nodes before linearized reading'  (duration: 848.27739ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:27:39.718078Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:27:38.867941Z","time spent":"850.087822ms","remote":"127.0.0.1:57520","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":2,"response size":29,"request content":"key:\"/registry/deployments/\" range_end:\"/registry/deployments0\" count_only:true "}
	{"level":"info","ts":"2024-04-29T20:28:18.916492Z","caller":"traceutil/trace.go:171","msg":"trace[659935616] transaction","detail":"{read_only:false; response_revision:1660; number_of_response:1; }","duration":"161.759988ms","start":"2024-04-29T20:28:18.754636Z","end":"2024-04-29T20:28:18.916396Z","steps":["trace[659935616] 'process raft request'  (duration: 161.212448ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:28:23.065603Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.557623ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13035826538271329067 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.106\" mod_revision:1655 > success:<request_put:<key:\"/registry/masterleases/192.168.61.106\" value_size:68 lease:3812454501416553257 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.106\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-29T20:28:23.066414Z","caller":"traceutil/trace.go:171","msg":"trace[162158979] transaction","detail":"{read_only:false; response_revision:1664; number_of_response:1; }","duration":"254.612151ms","start":"2024-04-29T20:28:22.811769Z","end":"2024-04-29T20:28:23.066381Z","steps":["trace[162158979] 'process raft request'  (duration: 122.971437ms)","trace[162158979] 'compare'  (duration: 130.374991ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-29T20:28:23.0665Z","caller":"traceutil/trace.go:171","msg":"trace[1822868115] linearizableReadLoop","detail":"{readStateIndex:1967; appliedIndex:1966; }","duration":"227.349645ms","start":"2024-04-29T20:28:22.839014Z","end":"2024-04-29T20:28:23.066364Z","steps":["trace[1822868115] 'read index received'  (duration: 95.664258ms)","trace[1822868115] 'applied index is now lower than readState.Index'  (duration: 131.683907ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T20:28:23.066931Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.802229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:28:23.067015Z","caller":"traceutil/trace.go:171","msg":"trace[202935862] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1664; }","duration":"228.027097ms","start":"2024-04-29T20:28:22.838963Z","end":"2024-04-29T20:28:23.06699Z","steps":["trace[202935862] 'agreement among raft nodes before linearized reading'  (duration: 227.642447ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:28:23.58767Z","caller":"traceutil/trace.go:171","msg":"trace[1648182283] transaction","detail":"{read_only:false; response_revision:1665; number_of_response:1; }","duration":"102.946983ms","start":"2024-04-29T20:28:23.484706Z","end":"2024-04-29T20:28:23.587653Z","steps":["trace[1648182283] 'process raft request'  (duration: 102.483571ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-29T20:28:47.942192Z","caller":"traceutil/trace.go:171","msg":"trace[19918005] transaction","detail":"{read_only:false; response_revision:1684; number_of_response:1; }","duration":"195.08006ms","start":"2024-04-29T20:28:47.747092Z","end":"2024-04-29T20:28:47.942172Z","steps":["trace[19918005] 'process raft request'  (duration: 194.740782ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:28:48.178769Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.100667ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13035826538271329192 > lease_revoke:<id:34e88f2b7745cb56>","response":"size:27"}
	{"level":"info","ts":"2024-04-29T20:28:48.178965Z","caller":"traceutil/trace.go:171","msg":"trace[1142200702] linearizableReadLoop","detail":"{readStateIndex:1992; appliedIndex:1990; }","duration":"340.593851ms","start":"2024-04-29T20:28:47.838348Z","end":"2024-04-29T20:28:48.178942Z","steps":["trace[1142200702] 'read index received'  (duration: 103.638077ms)","trace[1142200702] 'applied index is now lower than readState.Index'  (duration: 236.953823ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-29T20:28:48.17922Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.890132ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-29T20:28:48.179765Z","caller":"traceutil/trace.go:171","msg":"trace[2132792553] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1684; }","duration":"341.456132ms","start":"2024-04-29T20:28:47.838281Z","end":"2024-04-29T20:28:48.179737Z","steps":["trace[2132792553] 'agreement among raft nodes before linearized reading'  (duration: 340.884845ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-29T20:28:48.179856Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-29T20:28:47.838262Z","time spent":"341.571845ms","remote":"127.0.0.1:57282","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":27,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	
	
	==> kernel <==
	 20:28:59 up 22 min,  0 users,  load average: 0.18, 0.16, 0.11
	Linux default-k8s-diff-port-866143 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] <==
	W0429 20:26:32.468964       1 handler_proxy.go:93] no RequestInfo found in the context
	W0429 20:26:32.468988       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:26:32.469223       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:26:32.469250       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0429 20:26:32.469315       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:26:32.470593       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:27:32.470134       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:27:32.470196       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:27:32.470206       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:27:32.471417       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:27:32.471508       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:27:32.471515       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0429 20:27:39.212845       1 trace.go:236] Trace[633671969]: "Update" accept:application/json, */*,audit-id:8467bd47-3c32-4784-ab0f-c4173e72c2b5,client:192.168.61.106,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (29-Apr-2024 20:27:37.919) (total time: 1293ms):
	Trace[633671969]: ["GuaranteedUpdate etcd3" audit-id:8467bd47-3c32-4784-ab0f-c4173e72c2b5,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 1293ms (20:27:37.919)
	Trace[633671969]:  ---"Txn call completed" 1292ms (20:27:39.212)]
	Trace[633671969]: [1.293688801s] [1.293688801s] END
	I0429 20:27:39.716804       1 trace.go:236] Trace[570810229]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:f17a4446-e605-4df1-9e34-71ae04ca6828,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:apiserver-vtiqnbkdbazwttajmqs37mtdee,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-vtiqnbkdbazwttajmqs37mtdee,user-agent:kube-apiserver/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (29-Apr-2024 20:27:38.419) (total time: 1297ms):
	Trace[570810229]: ["GuaranteedUpdate etcd3" audit-id:f17a4446-e605-4df1-9e34-71ae04ca6828,key:/leases/kube-system/apiserver-vtiqnbkdbazwttajmqs37mtdee,type:*coordination.Lease,resource:leases.coordination.k8s.io 1297ms (20:27:38.419)
	Trace[570810229]:  ---"Txn call completed" 1296ms (20:27:39.716)]
	Trace[570810229]: [1.297366431s] [1.297366431s] END
	I0429 20:27:39.718224       1 trace.go:236] Trace[499317563]: "List" accept:application/json, */*,audit-id:df9689d9-f092-4b49-a948-5f455744a699,client:192.168.61.1,api-group:,api-version:v1,name:,subresource:,namespace:kubernetes-dashboard,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kubernetes-dashboard/pods,user-agent:e2e-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (29-Apr-2024 20:27:38.842) (total time: 876ms):
	Trace[499317563]: ["List(recursive=true) etcd3" audit-id:df9689d9-f092-4b49-a948-5f455744a699,key:/pods/kubernetes-dashboard,resourceVersion:,resourceVersionMatch:,limit:0,continue: 875ms (20:27:38.842)]
	Trace[499317563]: [876.020802ms] [876.020802ms] END
	
	
	==> kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] <==
	I0429 20:23:15.634996       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:23:44.924830       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:23:45.642623       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:24:14.931154       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:24:15.653460       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:24:44.936086       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:24:45.663054       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:25:14.940834       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:25:15.670457       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:25:44.946426       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:25:45.685469       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:26:14.953359       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:26:15.695449       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:26:44.961089       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:26:45.717218       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:27:14.968427       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:27:15.726566       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:27:44.976669       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:27:45.737002       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0429 20:28:07.755321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="245.953µs"
	E0429 20:28:14.983527       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:28:15.748732       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0429 20:28:18.920541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="401.172µs"
	E0429 20:28:44.989672       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:28:45.759300       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] <==
	I0429 20:06:32.289757       1 server_linux.go:69] "Using iptables proxy"
	I0429 20:06:32.299308       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.106"]
	I0429 20:06:32.347355       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 20:06:32.347455       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 20:06:32.347485       1 server_linux.go:165] "Using iptables Proxier"
	I0429 20:06:32.351154       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 20:06:32.351434       1 server.go:872] "Version info" version="v1.30.0"
	I0429 20:06:32.351479       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:06:32.352641       1 config.go:192] "Starting service config controller"
	I0429 20:06:32.352689       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 20:06:32.352725       1 config.go:101] "Starting endpoint slice config controller"
	I0429 20:06:32.352741       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 20:06:32.354511       1 config.go:319] "Starting node config controller"
	I0429 20:06:32.354554       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 20:06:32.453604       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 20:06:32.453691       1 shared_informer.go:320] Caches are synced for service config
	I0429 20:06:32.455597       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] <==
	I0429 20:06:29.096365       1 serving.go:380] Generated self-signed cert in-memory
	I0429 20:06:31.565129       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0429 20:06:31.571023       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:06:31.588739       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0429 20:06:31.588853       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0429 20:06:31.588939       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0429 20:06:31.588962       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0429 20:06:31.599329       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0429 20:06:31.599387       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0429 20:06:31.599406       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0429 20:06:31.599411       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0429 20:06:31.689161       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0429 20:06:31.700597       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0429 20:06:31.700699       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 20:26:26 default-k8s-diff-port-866143 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:26:33 default-k8s-diff-port-866143 kubelet[939]: E0429 20:26:33.737041     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:26:48 default-k8s-diff-port-866143 kubelet[939]: E0429 20:26:48.736845     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:26:59 default-k8s-diff-port-866143 kubelet[939]: E0429 20:26:59.736444     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:27:11 default-k8s-diff-port-866143 kubelet[939]: E0429 20:27:11.737190     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:27:24 default-k8s-diff-port-866143 kubelet[939]: E0429 20:27:24.736577     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:27:26 default-k8s-diff-port-866143 kubelet[939]: E0429 20:27:26.759556     939 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:27:26 default-k8s-diff-port-866143 kubelet[939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:27:26 default-k8s-diff-port-866143 kubelet[939]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:27:26 default-k8s-diff-port-866143 kubelet[939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:27:26 default-k8s-diff-port-866143 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:27:39 default-k8s-diff-port-866143 kubelet[939]: E0429 20:27:39.735944     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:27:54 default-k8s-diff-port-866143 kubelet[939]: E0429 20:27:54.762078     939 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 29 20:27:54 default-k8s-diff-port-866143 kubelet[939]: E0429 20:27:54.762377     939 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 29 20:27:54 default-k8s-diff-port-866143 kubelet[939]: E0429 20:27:54.762619     939 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-542jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdin
Once:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-g6gw2_kube-system(7a4b0494-73fb-4444-a8c1-544885a2d873): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 29 20:27:54 default-k8s-diff-port-866143 kubelet[939]: E0429 20:27:54.762717     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:28:07 default-k8s-diff-port-866143 kubelet[939]: E0429 20:28:07.737387     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:28:18 default-k8s-diff-port-866143 kubelet[939]: E0429 20:28:18.738968     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:28:26 default-k8s-diff-port-866143 kubelet[939]: E0429 20:28:26.762675     939 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:28:26 default-k8s-diff-port-866143 kubelet[939]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:28:26 default-k8s-diff-port-866143 kubelet[939]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:28:26 default-k8s-diff-port-866143 kubelet[939]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:28:26 default-k8s-diff-port-866143 kubelet[939]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:28:30 default-k8s-diff-port-866143 kubelet[939]: E0429 20:28:30.736477     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	Apr 29 20:28:45 default-k8s-diff-port-866143 kubelet[939]: E0429 20:28:45.736655     939 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-g6gw2" podUID="7a4b0494-73fb-4444-a8c1-544885a2d873"
	
	
	==> storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] <==
	I0429 20:07:03.128071       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 20:07:03.140310       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 20:07:03.140368       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 20:07:20.549646       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 20:07:20.550253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-866143_c7b182aa-9dc5-483a-a251-942834c1c696!
	I0429 20:07:20.552164       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c09addf-7050-4b36-b55d-ddcd2ef1ab98", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-866143_c7b182aa-9dc5-483a-a251-942834c1c696 became leader
	I0429 20:07:20.651082       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-866143_c7b182aa-9dc5-483a-a251-942834c1c696!
	
	
	==> storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] <==
	I0429 20:06:32.251851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0429 20:07:02.256515       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-866143 -n default-k8s-diff-port-866143
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-866143 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-g6gw2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-866143 describe pod metrics-server-569cc877fc-g6gw2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-866143 describe pod metrics-server-569cc877fc-g6gw2: exit status 1 (99.859811ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-g6gw2" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-866143 describe pod metrics-server-569cc877fc-g6gw2: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (538.28s)
E0429 20:30:23.194051   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.crt: no such file or directory

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (344.69s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-456788 -n no-preload-456788
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-29 20:26:00.15163331 +0000 UTC m=+6409.799008435
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-456788 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-456788 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.588µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-456788 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456788 -n no-preload-456788
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-456788 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-456788 logs -n 25: (1.348964405s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-437743 -- sudo                         | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-437743                                 | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-161370            | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-456788             | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-193781 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | disable-driver-mounts-193781                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 20:00 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-866143  | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-161370                 | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-919612        | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-456788                  | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:01 UTC | 29 Apr 24 20:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-919612             | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-866143       | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:10 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:25 UTC | 29 Apr 24 20:25 UTC |
	| start   | -p newest-cni-538390 --memory=2200 --alsologtostderr   | newest-cni-538390            | jenkins | v1.33.0 | 29 Apr 24 20:25 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:25:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:25:45.869766   73439 out.go:291] Setting OutFile to fd 1 ...
	I0429 20:25:45.869882   73439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:25:45.869896   73439 out.go:304] Setting ErrFile to fd 2...
	I0429 20:25:45.869900   73439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:25:45.870126   73439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 20:25:45.870871   73439 out.go:298] Setting JSON to false
	I0429 20:25:45.871899   73439 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7644,"bootTime":1714414702,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 20:25:45.871971   73439 start.go:139] virtualization: kvm guest
	I0429 20:25:45.874647   73439 out.go:177] * [newest-cni-538390] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 20:25:45.876452   73439 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:25:45.876472   73439 notify.go:220] Checking for updates...
	I0429 20:25:45.877872   73439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:25:45.879743   73439 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:25:45.881328   73439 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:25:45.882857   73439 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 20:25:45.884191   73439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:25:45.885864   73439 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:25:45.885956   73439 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:25:45.886043   73439 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:25:45.886158   73439 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:25:45.924323   73439 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 20:25:45.925663   73439 start.go:297] selected driver: kvm2
	I0429 20:25:45.925675   73439 start.go:901] validating driver "kvm2" against <nil>
	I0429 20:25:45.925686   73439 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:25:45.926523   73439 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:25:45.926589   73439 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 20:25:45.943096   73439 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 20:25:45.943152   73439 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0429 20:25:45.943176   73439 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0429 20:25:45.943462   73439 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0429 20:25:45.943525   73439 cni.go:84] Creating CNI manager for ""
	I0429 20:25:45.943537   73439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:25:45.943549   73439 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 20:25:45.943603   73439 start.go:340] cluster config:
	{Name:newest-cni-538390 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-538390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:25:45.943715   73439 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:25:45.946057   73439 out.go:177] * Starting "newest-cni-538390" primary control-plane node in "newest-cni-538390" cluster
	I0429 20:25:45.947301   73439 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:25:45.947338   73439 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 20:25:45.947347   73439 cache.go:56] Caching tarball of preloaded images
	I0429 20:25:45.947433   73439 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 20:25:45.947443   73439 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 20:25:45.947534   73439 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/newest-cni-538390/config.json ...
	I0429 20:25:45.947551   73439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/newest-cni-538390/config.json: {Name:mkf1bef9b651989e52476a1a38917048bdd73efb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:25:45.947679   73439 start.go:360] acquireMachinesLock for newest-cni-538390: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:25:45.947716   73439 start.go:364] duration metric: took 21.209µs to acquireMachinesLock for "newest-cni-538390"
	I0429 20:25:45.947739   73439 start.go:93] Provisioning new machine with config: &{Name:newest-cni-538390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.0 ClusterName:newest-cni-538390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:25:45.947813   73439 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 20:25:45.949598   73439 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 20:25:45.949760   73439 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:25:45.949806   73439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:25:45.965640   73439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41039
	I0429 20:25:45.966054   73439 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:25:45.966652   73439 main.go:141] libmachine: Using API Version  1
	I0429 20:25:45.966674   73439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:25:45.967036   73439 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:25:45.967254   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetMachineName
	I0429 20:25:45.967428   73439 main.go:141] libmachine: (newest-cni-538390) Calling .DriverName
	I0429 20:25:45.967597   73439 start.go:159] libmachine.API.Create for "newest-cni-538390" (driver="kvm2")
	I0429 20:25:45.967626   73439 client.go:168] LocalClient.Create starting
	I0429 20:25:45.967665   73439 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 20:25:45.967704   73439 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:45.967720   73439 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:45.967772   73439 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 20:25:45.967790   73439 main.go:141] libmachine: Decoding PEM data...
	I0429 20:25:45.967801   73439 main.go:141] libmachine: Parsing certificate...
	I0429 20:25:45.967817   73439 main.go:141] libmachine: Running pre-create checks...
	I0429 20:25:45.967830   73439 main.go:141] libmachine: (newest-cni-538390) Calling .PreCreateCheck
	I0429 20:25:45.968175   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetConfigRaw
	I0429 20:25:45.968583   73439 main.go:141] libmachine: Creating machine...
	I0429 20:25:45.968598   73439 main.go:141] libmachine: (newest-cni-538390) Calling .Create
	I0429 20:25:45.968750   73439 main.go:141] libmachine: (newest-cni-538390) Creating KVM machine...
	I0429 20:25:45.970375   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found existing default KVM network
	I0429 20:25:45.971551   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:45.971409   73461 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c0:7d:18} reservation:<nil>}
	I0429 20:25:45.972407   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:45.972299   73461 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:07:6e:95} reservation:<nil>}
	I0429 20:25:45.973181   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:45.973065   73461 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:2d:8a:41} reservation:<nil>}
	I0429 20:25:45.974224   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:45.974149   73461 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002c1750}
	I0429 20:25:45.974255   73439 main.go:141] libmachine: (newest-cni-538390) DBG | created network xml: 
	I0429 20:25:45.974264   73439 main.go:141] libmachine: (newest-cni-538390) DBG | <network>
	I0429 20:25:45.974275   73439 main.go:141] libmachine: (newest-cni-538390) DBG |   <name>mk-newest-cni-538390</name>
	I0429 20:25:45.974288   73439 main.go:141] libmachine: (newest-cni-538390) DBG |   <dns enable='no'/>
	I0429 20:25:45.974299   73439 main.go:141] libmachine: (newest-cni-538390) DBG |   
	I0429 20:25:45.974312   73439 main.go:141] libmachine: (newest-cni-538390) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0429 20:25:45.974325   73439 main.go:141] libmachine: (newest-cni-538390) DBG |     <dhcp>
	I0429 20:25:45.974339   73439 main.go:141] libmachine: (newest-cni-538390) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0429 20:25:45.974354   73439 main.go:141] libmachine: (newest-cni-538390) DBG |     </dhcp>
	I0429 20:25:45.974366   73439 main.go:141] libmachine: (newest-cni-538390) DBG |   </ip>
	I0429 20:25:45.974373   73439 main.go:141] libmachine: (newest-cni-538390) DBG |   
	I0429 20:25:45.974382   73439 main.go:141] libmachine: (newest-cni-538390) DBG | </network>
	I0429 20:25:45.974389   73439 main.go:141] libmachine: (newest-cni-538390) DBG | 
	I0429 20:25:45.979650   73439 main.go:141] libmachine: (newest-cni-538390) DBG | trying to create private KVM network mk-newest-cni-538390 192.168.72.0/24...
	I0429 20:25:46.052616   73439 main.go:141] libmachine: (newest-cni-538390) DBG | private KVM network mk-newest-cni-538390 192.168.72.0/24 created
	I0429 20:25:46.052651   73439 main.go:141] libmachine: (newest-cni-538390) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390 ...
	I0429 20:25:46.052665   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:46.052534   73461 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:25:46.052760   73439 main.go:141] libmachine: (newest-cni-538390) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 20:25:46.052810   73439 main.go:141] libmachine: (newest-cni-538390) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:25:46.287102   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:46.286951   73461 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390/id_rsa...
	I0429 20:25:46.355016   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:46.354872   73461 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390/newest-cni-538390.rawdisk...
	I0429 20:25:46.355056   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Writing magic tar header
	I0429 20:25:46.355077   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Writing SSH key tar header
	I0429 20:25:46.355137   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:46.355077   73461 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390 ...
	I0429 20:25:46.355267   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390
	I0429 20:25:46.355297   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 20:25:46.355312   73439 main.go:141] libmachine: (newest-cni-538390) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390 (perms=drwx------)
	I0429 20:25:46.355330   73439 main.go:141] libmachine: (newest-cni-538390) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 20:25:46.355344   73439 main.go:141] libmachine: (newest-cni-538390) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 20:25:46.355355   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:25:46.355380   73439 main.go:141] libmachine: (newest-cni-538390) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 20:25:46.355401   73439 main.go:141] libmachine: (newest-cni-538390) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 20:25:46.355415   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 20:25:46.355432   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 20:25:46.355445   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Checking permissions on dir: /home/jenkins
	I0429 20:25:46.355458   73439 main.go:141] libmachine: (newest-cni-538390) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 20:25:46.355481   73439 main.go:141] libmachine: (newest-cni-538390) Creating domain...
	I0429 20:25:46.355504   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Checking permissions on dir: /home
	I0429 20:25:46.355513   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Skipping /home - not owner
	I0429 20:25:46.356722   73439 main.go:141] libmachine: (newest-cni-538390) define libvirt domain using xml: 
	I0429 20:25:46.356754   73439 main.go:141] libmachine: (newest-cni-538390) <domain type='kvm'>
	I0429 20:25:46.356771   73439 main.go:141] libmachine: (newest-cni-538390)   <name>newest-cni-538390</name>
	I0429 20:25:46.356789   73439 main.go:141] libmachine: (newest-cni-538390)   <memory unit='MiB'>2200</memory>
	I0429 20:25:46.356798   73439 main.go:141] libmachine: (newest-cni-538390)   <vcpu>2</vcpu>
	I0429 20:25:46.356805   73439 main.go:141] libmachine: (newest-cni-538390)   <features>
	I0429 20:25:46.356819   73439 main.go:141] libmachine: (newest-cni-538390)     <acpi/>
	I0429 20:25:46.356829   73439 main.go:141] libmachine: (newest-cni-538390)     <apic/>
	I0429 20:25:46.356840   73439 main.go:141] libmachine: (newest-cni-538390)     <pae/>
	I0429 20:25:46.356849   73439 main.go:141] libmachine: (newest-cni-538390)     
	I0429 20:25:46.356879   73439 main.go:141] libmachine: (newest-cni-538390)   </features>
	I0429 20:25:46.356906   73439 main.go:141] libmachine: (newest-cni-538390)   <cpu mode='host-passthrough'>
	I0429 20:25:46.356920   73439 main.go:141] libmachine: (newest-cni-538390)   
	I0429 20:25:46.356930   73439 main.go:141] libmachine: (newest-cni-538390)   </cpu>
	I0429 20:25:46.356942   73439 main.go:141] libmachine: (newest-cni-538390)   <os>
	I0429 20:25:46.356952   73439 main.go:141] libmachine: (newest-cni-538390)     <type>hvm</type>
	I0429 20:25:46.356965   73439 main.go:141] libmachine: (newest-cni-538390)     <boot dev='cdrom'/>
	I0429 20:25:46.356975   73439 main.go:141] libmachine: (newest-cni-538390)     <boot dev='hd'/>
	I0429 20:25:46.357001   73439 main.go:141] libmachine: (newest-cni-538390)     <bootmenu enable='no'/>
	I0429 20:25:46.357021   73439 main.go:141] libmachine: (newest-cni-538390)   </os>
	I0429 20:25:46.357049   73439 main.go:141] libmachine: (newest-cni-538390)   <devices>
	I0429 20:25:46.357071   73439 main.go:141] libmachine: (newest-cni-538390)     <disk type='file' device='cdrom'>
	I0429 20:25:46.357087   73439 main.go:141] libmachine: (newest-cni-538390)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390/boot2docker.iso'/>
	I0429 20:25:46.357113   73439 main.go:141] libmachine: (newest-cni-538390)       <target dev='hdc' bus='scsi'/>
	I0429 20:25:46.357125   73439 main.go:141] libmachine: (newest-cni-538390)       <readonly/>
	I0429 20:25:46.357135   73439 main.go:141] libmachine: (newest-cni-538390)     </disk>
	I0429 20:25:46.357153   73439 main.go:141] libmachine: (newest-cni-538390)     <disk type='file' device='disk'>
	I0429 20:25:46.357175   73439 main.go:141] libmachine: (newest-cni-538390)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 20:25:46.357195   73439 main.go:141] libmachine: (newest-cni-538390)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390/newest-cni-538390.rawdisk'/>
	I0429 20:25:46.357208   73439 main.go:141] libmachine: (newest-cni-538390)       <target dev='hda' bus='virtio'/>
	I0429 20:25:46.357220   73439 main.go:141] libmachine: (newest-cni-538390)     </disk>
	I0429 20:25:46.357230   73439 main.go:141] libmachine: (newest-cni-538390)     <interface type='network'>
	I0429 20:25:46.357240   73439 main.go:141] libmachine: (newest-cni-538390)       <source network='mk-newest-cni-538390'/>
	I0429 20:25:46.357254   73439 main.go:141] libmachine: (newest-cni-538390)       <model type='virtio'/>
	I0429 20:25:46.357265   73439 main.go:141] libmachine: (newest-cni-538390)     </interface>
	I0429 20:25:46.357275   73439 main.go:141] libmachine: (newest-cni-538390)     <interface type='network'>
	I0429 20:25:46.357288   73439 main.go:141] libmachine: (newest-cni-538390)       <source network='default'/>
	I0429 20:25:46.357299   73439 main.go:141] libmachine: (newest-cni-538390)       <model type='virtio'/>
	I0429 20:25:46.357308   73439 main.go:141] libmachine: (newest-cni-538390)     </interface>
	I0429 20:25:46.357315   73439 main.go:141] libmachine: (newest-cni-538390)     <serial type='pty'>
	I0429 20:25:46.357326   73439 main.go:141] libmachine: (newest-cni-538390)       <target port='0'/>
	I0429 20:25:46.357338   73439 main.go:141] libmachine: (newest-cni-538390)     </serial>
	I0429 20:25:46.357368   73439 main.go:141] libmachine: (newest-cni-538390)     <console type='pty'>
	I0429 20:25:46.357400   73439 main.go:141] libmachine: (newest-cni-538390)       <target type='serial' port='0'/>
	I0429 20:25:46.357414   73439 main.go:141] libmachine: (newest-cni-538390)     </console>
	I0429 20:25:46.357425   73439 main.go:141] libmachine: (newest-cni-538390)     <rng model='virtio'>
	I0429 20:25:46.357449   73439 main.go:141] libmachine: (newest-cni-538390)       <backend model='random'>/dev/random</backend>
	I0429 20:25:46.357470   73439 main.go:141] libmachine: (newest-cni-538390)     </rng>
	I0429 20:25:46.357481   73439 main.go:141] libmachine: (newest-cni-538390)     
	I0429 20:25:46.357490   73439 main.go:141] libmachine: (newest-cni-538390)     
	I0429 20:25:46.357498   73439 main.go:141] libmachine: (newest-cni-538390)   </devices>
	I0429 20:25:46.357507   73439 main.go:141] libmachine: (newest-cni-538390) </domain>
	I0429 20:25:46.357517   73439 main.go:141] libmachine: (newest-cni-538390) 
	I0429 20:25:46.362041   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:0d:b0:d6 in network default
	I0429 20:25:46.362678   73439 main.go:141] libmachine: (newest-cni-538390) Ensuring networks are active...
	I0429 20:25:46.362696   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:25:46.363425   73439 main.go:141] libmachine: (newest-cni-538390) Ensuring network default is active
	I0429 20:25:46.363794   73439 main.go:141] libmachine: (newest-cni-538390) Ensuring network mk-newest-cni-538390 is active
	I0429 20:25:46.364326   73439 main.go:141] libmachine: (newest-cni-538390) Getting domain xml...
	I0429 20:25:46.365079   73439 main.go:141] libmachine: (newest-cni-538390) Creating domain...
	I0429 20:25:47.624585   73439 main.go:141] libmachine: (newest-cni-538390) Waiting to get IP...
	I0429 20:25:47.625575   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:25:47.626000   73439 main.go:141] libmachine: (newest-cni-538390) DBG | unable to find current IP address of domain newest-cni-538390 in network mk-newest-cni-538390
	I0429 20:25:47.626061   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:47.626000   73461 retry.go:31] will retry after 210.300044ms: waiting for machine to come up
	I0429 20:25:47.839009   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:25:47.839571   73439 main.go:141] libmachine: (newest-cni-538390) DBG | unable to find current IP address of domain newest-cni-538390 in network mk-newest-cni-538390
	I0429 20:25:47.839601   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:47.839509   73461 retry.go:31] will retry after 362.816036ms: waiting for machine to come up
	I0429 20:25:48.204211   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:25:48.204797   73439 main.go:141] libmachine: (newest-cni-538390) DBG | unable to find current IP address of domain newest-cni-538390 in network mk-newest-cni-538390
	I0429 20:25:48.204826   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:48.204758   73461 retry.go:31] will retry after 484.278712ms: waiting for machine to come up
	I0429 20:25:48.690319   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:25:48.690874   73439 main.go:141] libmachine: (newest-cni-538390) DBG | unable to find current IP address of domain newest-cni-538390 in network mk-newest-cni-538390
	I0429 20:25:48.690899   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:48.690822   73461 retry.go:31] will retry after 604.401632ms: waiting for machine to come up
	I0429 20:25:49.296579   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:25:49.297087   73439 main.go:141] libmachine: (newest-cni-538390) DBG | unable to find current IP address of domain newest-cni-538390 in network mk-newest-cni-538390
	I0429 20:25:49.297120   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:49.297032   73461 retry.go:31] will retry after 648.768983ms: waiting for machine to come up
	I0429 20:25:49.947559   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:25:49.948187   73439 main.go:141] libmachine: (newest-cni-538390) DBG | unable to find current IP address of domain newest-cni-538390 in network mk-newest-cni-538390
	I0429 20:25:49.948222   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:49.948149   73461 retry.go:31] will retry after 794.870729ms: waiting for machine to come up
	I0429 20:25:50.745372   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:25:50.745846   73439 main.go:141] libmachine: (newest-cni-538390) DBG | unable to find current IP address of domain newest-cni-538390 in network mk-newest-cni-538390
	I0429 20:25:50.745870   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:50.745806   73461 retry.go:31] will retry after 1.126980493s: waiting for machine to come up
	I0429 20:25:51.874780   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:25:51.875328   73439 main.go:141] libmachine: (newest-cni-538390) DBG | unable to find current IP address of domain newest-cni-538390 in network mk-newest-cni-538390
	I0429 20:25:51.875362   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:51.875287   73461 retry.go:31] will retry after 1.375383356s: waiting for machine to come up
	I0429 20:25:53.252707   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:25:53.253275   73439 main.go:141] libmachine: (newest-cni-538390) DBG | unable to find current IP address of domain newest-cni-538390 in network mk-newest-cni-538390
	I0429 20:25:53.253306   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:53.253230   73461 retry.go:31] will retry after 1.286012905s: waiting for machine to come up
	I0429 20:25:54.541276   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:25:54.541819   73439 main.go:141] libmachine: (newest-cni-538390) DBG | unable to find current IP address of domain newest-cni-538390 in network mk-newest-cni-538390
	I0429 20:25:54.541852   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:25:54.541760   73461 retry.go:31] will retry after 2.088849543s: waiting for machine to come up
	
	
	==> CRI-O <==
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.831449156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422360831421142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca00ec14-4e0a-47e5-9ba1-c22344fd7d58 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.831971767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d01c74f-6eb5-4423-ab11-3476b997611d name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.832062680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d01c74f-6eb5-4423-ab11-3476b997611d name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.832382202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d11f63276693766369907ad330504ed69597491d538cd9b5a329f53e0905107,PodSandboxId:fdd54e79fdd15614a68e32539580048e00223a09cd3114c4bf69b2737edb703d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421471129045087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd1c4813-8889-4f21-b21e-6007eaa163a6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d1a81fa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229a76fc962ea694d3ec4ef1d263c0f74884241f8f6d47bec60d8fa1273589d7,PodSandboxId:88d438c8c8de00704b3928c035bbe2d47c1ef1a06078688142c5aaadfd5a328a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421470274085527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pvhwv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38ee7b3-53fe-4609-9b2b-000f55de5d5c,},Annotations:map[string]string{io.kubernetes.container.hash: 749b4823,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5fc28aade0c3f32cd2a7a12b42b5608b169783d6272faea610cca67ee353b6,PodSandboxId:2f7a61fdbc3d8e688c5bb769ed501cdbda7575ccf57b66dc3b04c63d35cd656f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421469817009543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hcfbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0
b53824-478e-4523-ada4-1cd7ba306c81,},Annotations:map[string]string{io.kubernetes.container.hash: a08c04a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abda1b10e157741997a1ff6231a8d94bae873a8dc8ed5f4f50bcf25058f9ee0d,PodSandboxId:59f981fd24e8e92afd0fe36277fdbdeb4babf75c2e4be2bdde65e7ccd17946dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1714421469373027116,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6m95d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7,},Annotations:map[string]string{io.kubernetes.container.hash: f7b0245a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d547c066386359c26f32a9b3cdfeede872d97f68e253371e03cf4703b6fb2fa,PodSandboxId:488d6ae14da92daa58faf06f5f7bf8ce7a3a353d53ddd0b6f9fe844b52e45d85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421448799909861,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a08ac4ebc8433e053b376f035d670b,},Annotations:map[string]string{io.kubernetes.container.hash: 5d2686,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aa6b64ca6ded6d70a1edc0d5698398537da41a5a6f57ce52c6fd909454eb8ca,PodSandboxId:31c57455e70d7f5d16a47f64a012beb830434cacf5e70f328b54fc0cb61ff641,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421448737556685,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c073a5401d1f6a9264443a37232e7b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72ceac298eb0890d775ddb4eac2119401c8463dcd154f79f99c4532862f3f2e1,PodSandboxId:29122b6de9c841653ddbb98be21ac4f2be0a779ecf87f4a55f9490190caa306a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421448663523381,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205865ab9386e0544ce94281b335d3fa,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f235fbb4c2c97d173f9b1dd90f7c095c5e1b4a857f16f175edd51e9df2e1f13,PodSandboxId:6ab26349fa514b473a3ed37a595a92433d37a0e37b3976677189303140c4a97b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421448669420404,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3448a18c94c0d03ef9134e75fc8da576,},Annotations:map[string]string{io.kubernetes.container.hash: 5612cf45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d01c74f-6eb5-4423-ab11-3476b997611d name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.876411501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f24f08ff-0f27-44a6-8c68-4f6d4e0f0ecc name=/runtime.v1.RuntimeService/Version
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.876483010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f24f08ff-0f27-44a6-8c68-4f6d4e0f0ecc name=/runtime.v1.RuntimeService/Version
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.878120409Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f6b744e-53af-43e1-be89-1e1f32a3ab29 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.878774408Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422360878749589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f6b744e-53af-43e1-be89-1e1f32a3ab29 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.879519269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c06c0b4c-5d0f-4073-8648-a8c8f5e42e89 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.879569740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c06c0b4c-5d0f-4073-8648-a8c8f5e42e89 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.879893544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d11f63276693766369907ad330504ed69597491d538cd9b5a329f53e0905107,PodSandboxId:fdd54e79fdd15614a68e32539580048e00223a09cd3114c4bf69b2737edb703d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421471129045087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd1c4813-8889-4f21-b21e-6007eaa163a6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d1a81fa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229a76fc962ea694d3ec4ef1d263c0f74884241f8f6d47bec60d8fa1273589d7,PodSandboxId:88d438c8c8de00704b3928c035bbe2d47c1ef1a06078688142c5aaadfd5a328a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421470274085527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pvhwv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38ee7b3-53fe-4609-9b2b-000f55de5d5c,},Annotations:map[string]string{io.kubernetes.container.hash: 749b4823,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5fc28aade0c3f32cd2a7a12b42b5608b169783d6272faea610cca67ee353b6,PodSandboxId:2f7a61fdbc3d8e688c5bb769ed501cdbda7575ccf57b66dc3b04c63d35cd656f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421469817009543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hcfbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0
b53824-478e-4523-ada4-1cd7ba306c81,},Annotations:map[string]string{io.kubernetes.container.hash: a08c04a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abda1b10e157741997a1ff6231a8d94bae873a8dc8ed5f4f50bcf25058f9ee0d,PodSandboxId:59f981fd24e8e92afd0fe36277fdbdeb4babf75c2e4be2bdde65e7ccd17946dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1714421469373027116,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6m95d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7,},Annotations:map[string]string{io.kubernetes.container.hash: f7b0245a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d547c066386359c26f32a9b3cdfeede872d97f68e253371e03cf4703b6fb2fa,PodSandboxId:488d6ae14da92daa58faf06f5f7bf8ce7a3a353d53ddd0b6f9fe844b52e45d85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421448799909861,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a08ac4ebc8433e053b376f035d670b,},Annotations:map[string]string{io.kubernetes.container.hash: 5d2686,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aa6b64ca6ded6d70a1edc0d5698398537da41a5a6f57ce52c6fd909454eb8ca,PodSandboxId:31c57455e70d7f5d16a47f64a012beb830434cacf5e70f328b54fc0cb61ff641,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421448737556685,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c073a5401d1f6a9264443a37232e7b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72ceac298eb0890d775ddb4eac2119401c8463dcd154f79f99c4532862f3f2e1,PodSandboxId:29122b6de9c841653ddbb98be21ac4f2be0a779ecf87f4a55f9490190caa306a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421448663523381,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205865ab9386e0544ce94281b335d3fa,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f235fbb4c2c97d173f9b1dd90f7c095c5e1b4a857f16f175edd51e9df2e1f13,PodSandboxId:6ab26349fa514b473a3ed37a595a92433d37a0e37b3976677189303140c4a97b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421448669420404,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3448a18c94c0d03ef9134e75fc8da576,},Annotations:map[string]string{io.kubernetes.container.hash: 5612cf45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c06c0b4c-5d0f-4073-8648-a8c8f5e42e89 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.924692750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=085450d2-c284-4704-92ad-a2f6c63afecd name=/runtime.v1.RuntimeService/Version
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.924766653Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=085450d2-c284-4704-92ad-a2f6c63afecd name=/runtime.v1.RuntimeService/Version
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.927039287Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5df2903d-3700-4159-9343-1f80ca26421b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.927456589Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422360927433918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5df2903d-3700-4159-9343-1f80ca26421b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.927990696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d142d25-f21d-444f-8f30-b52b473c2d9c name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.928074879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d142d25-f21d-444f-8f30-b52b473c2d9c name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.928423919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d11f63276693766369907ad330504ed69597491d538cd9b5a329f53e0905107,PodSandboxId:fdd54e79fdd15614a68e32539580048e00223a09cd3114c4bf69b2737edb703d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421471129045087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd1c4813-8889-4f21-b21e-6007eaa163a6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d1a81fa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229a76fc962ea694d3ec4ef1d263c0f74884241f8f6d47bec60d8fa1273589d7,PodSandboxId:88d438c8c8de00704b3928c035bbe2d47c1ef1a06078688142c5aaadfd5a328a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421470274085527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pvhwv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38ee7b3-53fe-4609-9b2b-000f55de5d5c,},Annotations:map[string]string{io.kubernetes.container.hash: 749b4823,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5fc28aade0c3f32cd2a7a12b42b5608b169783d6272faea610cca67ee353b6,PodSandboxId:2f7a61fdbc3d8e688c5bb769ed501cdbda7575ccf57b66dc3b04c63d35cd656f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421469817009543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hcfbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0
b53824-478e-4523-ada4-1cd7ba306c81,},Annotations:map[string]string{io.kubernetes.container.hash: a08c04a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abda1b10e157741997a1ff6231a8d94bae873a8dc8ed5f4f50bcf25058f9ee0d,PodSandboxId:59f981fd24e8e92afd0fe36277fdbdeb4babf75c2e4be2bdde65e7ccd17946dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1714421469373027116,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6m95d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7,},Annotations:map[string]string{io.kubernetes.container.hash: f7b0245a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d547c066386359c26f32a9b3cdfeede872d97f68e253371e03cf4703b6fb2fa,PodSandboxId:488d6ae14da92daa58faf06f5f7bf8ce7a3a353d53ddd0b6f9fe844b52e45d85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421448799909861,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a08ac4ebc8433e053b376f035d670b,},Annotations:map[string]string{io.kubernetes.container.hash: 5d2686,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aa6b64ca6ded6d70a1edc0d5698398537da41a5a6f57ce52c6fd909454eb8ca,PodSandboxId:31c57455e70d7f5d16a47f64a012beb830434cacf5e70f328b54fc0cb61ff641,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421448737556685,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c073a5401d1f6a9264443a37232e7b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72ceac298eb0890d775ddb4eac2119401c8463dcd154f79f99c4532862f3f2e1,PodSandboxId:29122b6de9c841653ddbb98be21ac4f2be0a779ecf87f4a55f9490190caa306a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421448663523381,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205865ab9386e0544ce94281b335d3fa,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f235fbb4c2c97d173f9b1dd90f7c095c5e1b4a857f16f175edd51e9df2e1f13,PodSandboxId:6ab26349fa514b473a3ed37a595a92433d37a0e37b3976677189303140c4a97b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421448669420404,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3448a18c94c0d03ef9134e75fc8da576,},Annotations:map[string]string{io.kubernetes.container.hash: 5612cf45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d142d25-f21d-444f-8f30-b52b473c2d9c name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.964856826Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a100cf44-22b5-4d5a-a782-79a3d20e490a name=/runtime.v1.RuntimeService/Version
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.964961699Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a100cf44-22b5-4d5a-a782-79a3d20e490a name=/runtime.v1.RuntimeService/Version
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.977772976Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af432dec-0da5-4989-8505-4534166724d2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.978157694Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422360978134275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af432dec-0da5-4989-8505-4534166724d2 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.979257815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae8df3d7-ac67-44f9-8a67-02425b3b2643 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.979362857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae8df3d7-ac67-44f9-8a67-02425b3b2643 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:00 no-preload-456788 crio[729]: time="2024-04-29 20:26:00.979580987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d11f63276693766369907ad330504ed69597491d538cd9b5a329f53e0905107,PodSandboxId:fdd54e79fdd15614a68e32539580048e00223a09cd3114c4bf69b2737edb703d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1714421471129045087,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd1c4813-8889-4f21-b21e-6007eaa163a6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d1a81fa,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229a76fc962ea694d3ec4ef1d263c0f74884241f8f6d47bec60d8fa1273589d7,PodSandboxId:88d438c8c8de00704b3928c035bbe2d47c1ef1a06078688142c5aaadfd5a328a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421470274085527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-pvhwv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38ee7b3-53fe-4609-9b2b-000f55de5d5c,},Annotations:map[string]string{io.kubernetes.container.hash: 749b4823,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c5fc28aade0c3f32cd2a7a12b42b5608b169783d6272faea610cca67ee353b6,PodSandboxId:2f7a61fdbc3d8e688c5bb769ed501cdbda7575ccf57b66dc3b04c63d35cd656f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421469817009543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hcfbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0
b53824-478e-4523-ada4-1cd7ba306c81,},Annotations:map[string]string{io.kubernetes.container.hash: a08c04a8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abda1b10e157741997a1ff6231a8d94bae873a8dc8ed5f4f50bcf25058f9ee0d,PodSandboxId:59f981fd24e8e92afd0fe36277fdbdeb4babf75c2e4be2bdde65e7ccd17946dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1714421469373027116,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6m95d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7,},Annotations:map[string]string{io.kubernetes.container.hash: f7b0245a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d547c066386359c26f32a9b3cdfeede872d97f68e253371e03cf4703b6fb2fa,PodSandboxId:488d6ae14da92daa58faf06f5f7bf8ce7a3a353d53ddd0b6f9fe844b52e45d85,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421448799909861,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14a08ac4ebc8433e053b376f035d670b,},Annotations:map[string]string{io.kubernetes.container.hash: 5d2686,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aa6b64ca6ded6d70a1edc0d5698398537da41a5a6f57ce52c6fd909454eb8ca,PodSandboxId:31c57455e70d7f5d16a47f64a012beb830434cacf5e70f328b54fc0cb61ff641,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421448737556685,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c073a5401d1f6a9264443a37232e7b,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72ceac298eb0890d775ddb4eac2119401c8463dcd154f79f99c4532862f3f2e1,PodSandboxId:29122b6de9c841653ddbb98be21ac4f2be0a779ecf87f4a55f9490190caa306a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421448663523381,Labels:map[string]string{io.kubernetes.container.name: kub
e-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 205865ab9386e0544ce94281b335d3fa,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f235fbb4c2c97d173f9b1dd90f7c095c5e1b4a857f16f175edd51e9df2e1f13,PodSandboxId:6ab26349fa514b473a3ed37a595a92433d37a0e37b3976677189303140c4a97b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421448669420404,Labels:map[string]string{io.kubernetes.container.name
: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-456788,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3448a18c94c0d03ef9134e75fc8da576,},Annotations:map[string]string{io.kubernetes.container.hash: 5612cf45,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae8df3d7-ac67-44f9-8a67-02425b3b2643 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7d11f63276693       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   fdd54e79fdd15       storage-provisioner
	229a76fc962ea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   88d438c8c8de0       coredns-7db6d8ff4d-pvhwv
	5c5fc28aade0c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   2f7a61fdbc3d8       coredns-7db6d8ff4d-hcfbq
	abda1b10e1577       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   14 minutes ago      Running             kube-proxy                0                   59f981fd24e8e       kube-proxy-6m95d
	6d547c0663863       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   488d6ae14da92       etcd-no-preload-456788
	8aa6b64ca6ded       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   15 minutes ago      Running             kube-scheduler            2                   31c57455e70d7       kube-scheduler-no-preload-456788
	0f235fbb4c2c9       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   15 minutes ago      Running             kube-apiserver            2                   6ab26349fa514       kube-apiserver-no-preload-456788
	72ceac298eb08       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   15 minutes ago      Running             kube-controller-manager   2                   29122b6de9c84       kube-controller-manager-no-preload-456788
	
	
	==> coredns [229a76fc962ea694d3ec4ef1d263c0f74884241f8f6d47bec60d8fa1273589d7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [5c5fc28aade0c3f32cd2a7a12b42b5608b169783d6272faea610cca67ee353b6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-456788
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-456788
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=no-preload-456788
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T20_10_55_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:10:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-456788
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:25:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:21:27 +0000   Mon, 29 Apr 2024 20:10:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:21:27 +0000   Mon, 29 Apr 2024 20:10:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:21:27 +0000   Mon, 29 Apr 2024 20:10:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:21:27 +0000   Mon, 29 Apr 2024 20:11:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.235
	  Hostname:    no-preload-456788
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef89b0258cca4ea6b20778f725a369a5
	  System UUID:                ef89b025-8cca-4ea6-b207-78f725a369a5
	  Boot ID:                    0cc4a78e-ba7c-4855-80b5-3987fa0a2c2a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-hcfbq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-pvhwv                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-456788                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-456788             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-456788    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-6m95d                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-456788             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-sxgwr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node no-preload-456788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node no-preload-456788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node no-preload-456788 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node no-preload-456788 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node no-preload-456788 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node no-preload-456788 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node no-preload-456788 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m                kubelet          Node no-preload-456788 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-456788 event: Registered Node no-preload-456788 in Controller
	
	
	==> dmesg <==
	[  +0.042923] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.629509] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.464896] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.729155] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.705460] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.061878] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070433] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.207964] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.156758] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.352328] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[ +16.870529] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	[  +0.063084] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.641919] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device
	[Apr29 20:06] kauditd_printk_skb: 100 callbacks suppressed
	[  +7.380681] kauditd_printk_skb: 52 callbacks suppressed
	[  +7.486488] kauditd_printk_skb: 24 callbacks suppressed
	[Apr29 20:10] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.939334] systemd-fstab-generator[4069]: Ignoring "noauto" option for root device
	[  +4.726296] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.356736] systemd-fstab-generator[4396]: Ignoring "noauto" option for root device
	[Apr29 20:11] systemd-fstab-generator[4623]: Ignoring "noauto" option for root device
	[  +0.128995] kauditd_printk_skb: 14 callbacks suppressed
	[Apr29 20:12] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [6d547c066386359c26f32a9b3cdfeede872d97f68e253371e03cf4703b6fb2fa] <==
	{"level":"info","ts":"2024-04-29T20:10:49.265163Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.235:2380"}
	{"level":"info","ts":"2024-04-29T20:10:49.703774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T20:10:49.703888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T20:10:49.70392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 received MsgPreVoteResp from feb6ae41040cd9b8 at term 1"}
	{"level":"info","ts":"2024-04-29T20:10:49.703932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T20:10:49.703937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 received MsgVoteResp from feb6ae41040cd9b8 at term 2"}
	{"level":"info","ts":"2024-04-29T20:10:49.703946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"feb6ae41040cd9b8 became leader at term 2"}
	{"level":"info","ts":"2024-04-29T20:10:49.703953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: feb6ae41040cd9b8 elected leader feb6ae41040cd9b8 at term 2"}
	{"level":"info","ts":"2024-04-29T20:10:49.706445Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"feb6ae41040cd9b8","local-member-attributes":"{Name:no-preload-456788 ClientURLs:[https://192.168.39.235:2379]}","request-path":"/0/members/feb6ae41040cd9b8/attributes","cluster-id":"1b3c53dd134e6187","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T20:10:49.706668Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:10:49.710526Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:10:49.711016Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:10:49.721277Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T20:10:49.728421Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T20:10:49.72735Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.235:2379"}
	{"level":"info","ts":"2024-04-29T20:10:49.727743Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1b3c53dd134e6187","local-member-id":"feb6ae41040cd9b8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:10:49.728754Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:10:49.728835Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:10:49.735635Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T20:20:49.805392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":719}
	{"level":"info","ts":"2024-04-29T20:20:49.815826Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":719,"took":"9.755045ms","hash":4012013937,"current-db-size-bytes":2199552,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2199552,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-04-29T20:20:49.815929Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4012013937,"revision":719,"compact-revision":-1}
	{"level":"info","ts":"2024-04-29T20:25:49.816743Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":962}
	{"level":"info","ts":"2024-04-29T20:25:49.821366Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":962,"took":"3.807225ms","hash":1073052526,"current-db-size-bytes":2199552,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1564672,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-29T20:25:49.821471Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1073052526,"revision":962,"compact-revision":719}
	
	
	==> kernel <==
	 20:26:01 up 20 min,  0 users,  load average: 0.01, 0.12, 0.18
	Linux no-preload-456788 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0f235fbb4c2c97d173f9b1dd90f7c095c5e1b4a857f16f175edd51e9df2e1f13] <==
	I0429 20:20:52.676930       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:21:52.676345       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:21:52.676406       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:21:52.676415       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:21:52.677572       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:21:52.677762       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:21:52.677797       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:23:52.677352       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:23:52.677455       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:23:52.677466       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:23:52.678543       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:23:52.678702       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:23:52.678737       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:25:51.680430       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:25:51.680841       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0429 20:25:52.681354       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:25:52.681522       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:25:52.681762       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:25:52.681405       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:25:52.682004       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:25:52.683220       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [72ceac298eb0890d775ddb4eac2119401c8463dcd154f79f99c4532862f3f2e1] <==
	I0429 20:20:08.393677       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:20:37.914295       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:20:38.404280       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:21:07.921523       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:21:08.416515       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:21:37.926774       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:21:38.425476       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:22:07.934170       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:22:08.435529       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0429 20:22:08.798437       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="113.628µs"
	I0429 20:22:23.791953       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="241.626µs"
	E0429 20:22:37.942144       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:22:38.444999       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:23:07.948421       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:23:08.453813       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:23:37.953751       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:23:38.464027       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:24:07.960746       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:24:08.476646       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:24:37.966356       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:24:38.486758       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:25:07.972090       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:25:08.495563       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:25:37.978159       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:25:38.504928       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [abda1b10e157741997a1ff6231a8d94bae873a8dc8ed5f4f50bcf25058f9ee0d] <==
	I0429 20:11:09.854428       1 server_linux.go:69] "Using iptables proxy"
	I0429 20:11:09.888525       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.235"]
	I0429 20:11:10.248806       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 20:11:10.248856       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 20:11:10.248874       1 server_linux.go:165] "Using iptables Proxier"
	I0429 20:11:10.252701       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 20:11:10.252892       1 server.go:872] "Version info" version="v1.30.0"
	I0429 20:11:10.252907       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:11:10.262121       1 config.go:192] "Starting service config controller"
	I0429 20:11:10.262151       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 20:11:10.263396       1 config.go:101] "Starting endpoint slice config controller"
	I0429 20:11:10.263458       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 20:11:10.264243       1 config.go:319] "Starting node config controller"
	I0429 20:11:10.264254       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 20:11:10.362821       1 shared_informer.go:320] Caches are synced for service config
	I0429 20:11:10.371405       1 shared_informer.go:320] Caches are synced for node config
	I0429 20:11:10.371456       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8aa6b64ca6ded6d70a1edc0d5698398537da41a5a6f57ce52c6fd909454eb8ca] <==
	W0429 20:10:52.546604       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 20:10:52.546725       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 20:10:52.783351       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0429 20:10:52.786502       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0429 20:10:52.834093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 20:10:52.834155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 20:10:52.867383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 20:10:52.867588       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 20:10:52.881495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 20:10:52.881553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 20:10:52.984440       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 20:10:52.984590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 20:10:53.059497       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 20:10:53.059737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 20:10:53.064449       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 20:10:53.064832       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 20:10:53.066662       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 20:10:53.066762       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0429 20:10:53.117540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 20:10:53.117994       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 20:10:53.117825       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 20:10:53.118114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 20:10:53.154239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 20:10:53.154294       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0429 20:10:55.604810       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 20:23:54 no-preload-456788 kubelet[4403]: E0429 20:23:54.828000    4403 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:23:54 no-preload-456788 kubelet[4403]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:23:54 no-preload-456788 kubelet[4403]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:23:54 no-preload-456788 kubelet[4403]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:23:54 no-preload-456788 kubelet[4403]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:24:07 no-preload-456788 kubelet[4403]: E0429 20:24:07.774619    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:24:21 no-preload-456788 kubelet[4403]: E0429 20:24:21.775970    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:24:32 no-preload-456788 kubelet[4403]: E0429 20:24:32.774092    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:24:44 no-preload-456788 kubelet[4403]: E0429 20:24:44.773327    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:24:54 no-preload-456788 kubelet[4403]: E0429 20:24:54.832016    4403 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:24:54 no-preload-456788 kubelet[4403]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:24:54 no-preload-456788 kubelet[4403]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:24:54 no-preload-456788 kubelet[4403]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:24:54 no-preload-456788 kubelet[4403]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:24:55 no-preload-456788 kubelet[4403]: E0429 20:24:55.774486    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:25:08 no-preload-456788 kubelet[4403]: E0429 20:25:08.773621    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:25:22 no-preload-456788 kubelet[4403]: E0429 20:25:22.772790    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:25:33 no-preload-456788 kubelet[4403]: E0429 20:25:33.772771    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:25:48 no-preload-456788 kubelet[4403]: E0429 20:25:48.774060    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	Apr 29 20:25:54 no-preload-456788 kubelet[4403]: E0429 20:25:54.833950    4403 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:25:54 no-preload-456788 kubelet[4403]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:25:54 no-preload-456788 kubelet[4403]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:25:54 no-preload-456788 kubelet[4403]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:25:54 no-preload-456788 kubelet[4403]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:26:00 no-preload-456788 kubelet[4403]: E0429 20:26:00.775173    4403 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-sxgwr" podUID="046d28fe-d51e-43ba-9550-d1d7e33d9d84"
	
	
	==> storage-provisioner [7d11f63276693766369907ad330504ed69597491d538cd9b5a329f53e0905107] <==
	I0429 20:11:11.302162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 20:11:11.325107       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 20:11:11.325949       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 20:11:11.347805       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 20:11:11.348910       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-456788_a9e008ad-f36b-43f8-a4f8-c7bbb53e2367!
	I0429 20:11:11.361054       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"19f42fe4-9eff-437d-bb89-d4580910f858", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-456788_a9e008ad-f36b-43f8-a4f8-c7bbb53e2367 became leader
	I0429 20:11:11.451364       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-456788_a9e008ad-f36b-43f8-a4f8-c7bbb53e2367!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-456788 -n no-preload-456788
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-456788 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-sxgwr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-456788 describe pod metrics-server-569cc877fc-sxgwr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-456788 describe pod metrics-server-569cc877fc-sxgwr: exit status 1 (78.232735ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-sxgwr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-456788 describe pod metrics-server-569cc877fc-sxgwr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (344.69s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (312.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-161370 -n embed-certs-161370
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-29 20:26:19.0467197 +0000 UTC m=+6428.694094836
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-161370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-161370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.608µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-161370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-161370 -n embed-certs-161370
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-161370 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-161370 logs -n 25: (1.514609749s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-161370            | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-456788             | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-193781 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | disable-driver-mounts-193781                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 20:00 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-866143  | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-161370                 | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-919612        | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-456788                  | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:01 UTC | 29 Apr 24 20:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-919612             | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-866143       | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:10 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:25 UTC | 29 Apr 24 20:25 UTC |
	| start   | -p newest-cni-538390 --memory=2200 --alsologtostderr   | newest-cni-538390            | jenkins | v1.33.0 | 29 Apr 24 20:25 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:26 UTC | 29 Apr 24 20:26 UTC |
	| start   | -p auto-870155 --memory=3072                           | auto-870155                  | jenkins | v1.33.0 | 29 Apr 24 20:26 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:26:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:26:03.356114   73820 out.go:291] Setting OutFile to fd 1 ...
	I0429 20:26:03.356396   73820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:26:03.356407   73820 out.go:304] Setting ErrFile to fd 2...
	I0429 20:26:03.356411   73820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:26:03.356668   73820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 20:26:03.357281   73820 out.go:298] Setting JSON to false
	I0429 20:26:03.358205   73820 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7661,"bootTime":1714414702,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 20:26:03.358260   73820 start.go:139] virtualization: kvm guest
	I0429 20:26:03.360641   73820 out.go:177] * [auto-870155] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 20:26:03.362399   73820 notify.go:220] Checking for updates...
	I0429 20:26:03.362411   73820 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:26:03.363881   73820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:26:03.365164   73820 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:26:03.366649   73820 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:26:03.368048   73820 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 20:26:03.369367   73820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:26:03.371020   73820 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:26:03.371124   73820 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:26:03.371264   73820 config.go:182] Loaded profile config "newest-cni-538390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:26:03.371348   73820 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:26:03.406913   73820 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 20:26:03.408444   73820 start.go:297] selected driver: kvm2
	I0429 20:26:03.408462   73820 start.go:901] validating driver "kvm2" against <nil>
	I0429 20:26:03.408473   73820 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:26:03.409354   73820 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:26:03.409426   73820 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 20:26:03.424696   73820 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 20:26:03.424744   73820 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 20:26:03.424950   73820 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:26:03.425009   73820 cni.go:84] Creating CNI manager for ""
	I0429 20:26:03.425021   73820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:26:03.425030   73820 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 20:26:03.425075   73820 start.go:340] cluster config:
	{Name:auto-870155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-870155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:26:03.425166   73820 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:26:03.426849   73820 out.go:177] * Starting "auto-870155" primary control-plane node in "auto-870155" cluster
	I0429 20:26:01.450171   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:01.450755   73439 main.go:141] libmachine: (newest-cni-538390) DBG | unable to find current IP address of domain newest-cni-538390 in network mk-newest-cni-538390
	I0429 20:26:01.450777   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:26:01.450722   73461 retry.go:31] will retry after 2.824222952s: waiting for machine to come up
	I0429 20:26:04.277022   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:04.277480   73439 main.go:141] libmachine: (newest-cni-538390) DBG | unable to find current IP address of domain newest-cni-538390 in network mk-newest-cni-538390
	I0429 20:26:04.277510   73439 main.go:141] libmachine: (newest-cni-538390) DBG | I0429 20:26:04.277443   73461 retry.go:31] will retry after 4.911398109s: waiting for machine to come up
	I0429 20:26:03.428102   73820 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:26:03.428132   73820 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 20:26:03.428138   73820 cache.go:56] Caching tarball of preloaded images
	I0429 20:26:03.428220   73820 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 20:26:03.428231   73820 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 20:26:03.428317   73820 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/auto-870155/config.json ...
	I0429 20:26:03.428339   73820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/auto-870155/config.json: {Name:mk8d31ec18289d94ec7f3f3f087cbc787715ba0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:26:03.428457   73820 start.go:360] acquireMachinesLock for auto-870155: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:26:10.811372   73820 start.go:364] duration metric: took 7.382895093s to acquireMachinesLock for "auto-870155"
	I0429 20:26:10.811455   73820 start.go:93] Provisioning new machine with config: &{Name:auto-870155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:auto-870155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:26:10.811591   73820 start.go:125] createHost starting for "" (driver="kvm2")
	I0429 20:26:09.192340   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.193057   73439 main.go:141] libmachine: (newest-cni-538390) Found IP for machine: 192.168.72.75
	I0429 20:26:09.193072   73439 main.go:141] libmachine: (newest-cni-538390) Reserving static IP address...
	I0429 20:26:09.193086   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has current primary IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.193552   73439 main.go:141] libmachine: (newest-cni-538390) DBG | unable to find host DHCP lease matching {name: "newest-cni-538390", mac: "52:54:00:19:3f:41", ip: "192.168.72.75"} in network mk-newest-cni-538390
	I0429 20:26:09.268655   73439 main.go:141] libmachine: (newest-cni-538390) Reserved static IP address: 192.168.72.75
	I0429 20:26:09.268691   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Getting to WaitForSSH function...
	I0429 20:26:09.268701   73439 main.go:141] libmachine: (newest-cni-538390) Waiting for SSH to be available...
	I0429 20:26:09.271659   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.272298   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:minikube Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:09.272340   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.272547   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Using SSH client type: external
	I0429 20:26:09.272579   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390/id_rsa (-rw-------)
	I0429 20:26:09.272627   73439 main.go:141] libmachine: (newest-cni-538390) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.75 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:26:09.272654   73439 main.go:141] libmachine: (newest-cni-538390) DBG | About to run SSH command:
	I0429 20:26:09.272673   73439 main.go:141] libmachine: (newest-cni-538390) DBG | exit 0
	I0429 20:26:09.406546   73439 main.go:141] libmachine: (newest-cni-538390) DBG | SSH cmd err, output: <nil>: 
	I0429 20:26:09.406850   73439 main.go:141] libmachine: (newest-cni-538390) KVM machine creation complete!
	I0429 20:26:09.407119   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetConfigRaw
	I0429 20:26:09.407648   73439 main.go:141] libmachine: (newest-cni-538390) Calling .DriverName
	I0429 20:26:09.407833   73439 main.go:141] libmachine: (newest-cni-538390) Calling .DriverName
	I0429 20:26:09.407983   73439 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 20:26:09.407999   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetState
	I0429 20:26:09.409226   73439 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 20:26:09.409243   73439 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 20:26:09.409252   73439 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 20:26:09.409262   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHHostname
	I0429 20:26:09.411598   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.412071   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:09.412090   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.412281   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHPort
	I0429 20:26:09.412481   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:09.412640   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:09.412757   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHUsername
	I0429 20:26:09.412903   73439 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:09.413156   73439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.75 22 <nil> <nil>}
	I0429 20:26:09.413170   73439 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 20:26:09.526034   73439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:26:09.526091   73439 main.go:141] libmachine: Detecting the provisioner...
	I0429 20:26:09.526103   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHHostname
	I0429 20:26:09.529005   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.529333   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:09.529360   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.529508   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHPort
	I0429 20:26:09.529724   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:09.529906   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:09.530092   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHUsername
	I0429 20:26:09.530302   73439 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:09.530458   73439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.75 22 <nil> <nil>}
	I0429 20:26:09.530469   73439 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 20:26:09.648033   73439 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 20:26:09.648110   73439 main.go:141] libmachine: found compatible host: buildroot
	I0429 20:26:09.648125   73439 main.go:141] libmachine: Provisioning with buildroot...
	I0429 20:26:09.648141   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetMachineName
	I0429 20:26:09.648398   73439 buildroot.go:166] provisioning hostname "newest-cni-538390"
	I0429 20:26:09.648426   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetMachineName
	I0429 20:26:09.648659   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHHostname
	I0429 20:26:09.651125   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.651492   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:09.651524   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.651650   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHPort
	I0429 20:26:09.651859   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:09.652017   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:09.652186   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHUsername
	I0429 20:26:09.652392   73439 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:09.652568   73439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.75 22 <nil> <nil>}
	I0429 20:26:09.652584   73439 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-538390 && echo "newest-cni-538390" | sudo tee /etc/hostname
	I0429 20:26:09.784417   73439 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-538390
	
	I0429 20:26:09.784450   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHHostname
	I0429 20:26:09.787227   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.787576   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:09.787616   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.787761   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHPort
	I0429 20:26:09.787990   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:09.788150   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:09.788320   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHUsername
	I0429 20:26:09.788481   73439 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:09.788644   73439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.75 22 <nil> <nil>}
	I0429 20:26:09.788661   73439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-538390' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-538390/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-538390' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:26:09.912567   73439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:26:09.912601   73439 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:26:09.912654   73439 buildroot.go:174] setting up certificates
	I0429 20:26:09.912669   73439 provision.go:84] configureAuth start
	I0429 20:26:09.912682   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetMachineName
	I0429 20:26:09.913038   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetIP
	I0429 20:26:09.915821   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.916136   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:09.916163   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.916287   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHHostname
	I0429 20:26:09.918555   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.918904   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:09.918941   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:09.919063   73439 provision.go:143] copyHostCerts
	I0429 20:26:09.919123   73439 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:26:09.919133   73439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:26:09.919201   73439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:26:09.919318   73439 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:26:09.919330   73439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:26:09.919369   73439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:26:09.919433   73439 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:26:09.919445   73439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:26:09.919467   73439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:26:09.919516   73439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.newest-cni-538390 san=[127.0.0.1 192.168.72.75 localhost minikube newest-cni-538390]
	I0429 20:26:10.054915   73439 provision.go:177] copyRemoteCerts
	I0429 20:26:10.054970   73439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:26:10.054992   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHHostname
	I0429 20:26:10.057749   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.058091   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:10.058118   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.058324   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHPort
	I0429 20:26:10.058561   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:10.058747   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHUsername
	I0429 20:26:10.058883   73439 sshutil.go:53] new ssh client: &{IP:192.168.72.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390/id_rsa Username:docker}
	I0429 20:26:10.151981   73439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:26:10.181848   73439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 20:26:10.210057   73439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:26:10.237507   73439 provision.go:87] duration metric: took 324.827022ms to configureAuth
	I0429 20:26:10.237532   73439 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:26:10.237705   73439 config.go:182] Loaded profile config "newest-cni-538390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:26:10.237782   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHHostname
	I0429 20:26:10.240422   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.240825   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:10.240855   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.241136   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHPort
	I0429 20:26:10.241344   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:10.241541   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:10.241708   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHUsername
	I0429 20:26:10.241900   73439 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:10.242150   73439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.75 22 <nil> <nil>}
	I0429 20:26:10.242175   73439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:26:10.543145   73439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:26:10.543171   73439 main.go:141] libmachine: Checking connection to Docker...
	I0429 20:26:10.543198   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetURL
	I0429 20:26:10.544431   73439 main.go:141] libmachine: (newest-cni-538390) DBG | Using libvirt version 6000000
	I0429 20:26:10.546728   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.547133   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:10.547164   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.547453   73439 main.go:141] libmachine: Docker is up and running!
	I0429 20:26:10.547474   73439 main.go:141] libmachine: Reticulating splines...
	I0429 20:26:10.547481   73439 client.go:171] duration metric: took 24.579841865s to LocalClient.Create
	I0429 20:26:10.547509   73439 start.go:167] duration metric: took 24.579911971s to libmachine.API.Create "newest-cni-538390"
	I0429 20:26:10.547522   73439 start.go:293] postStartSetup for "newest-cni-538390" (driver="kvm2")
	I0429 20:26:10.547535   73439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:26:10.547571   73439 main.go:141] libmachine: (newest-cni-538390) Calling .DriverName
	I0429 20:26:10.547823   73439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:26:10.547852   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHHostname
	I0429 20:26:10.550559   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.551030   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:10.551062   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.551223   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHPort
	I0429 20:26:10.551409   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:10.551573   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHUsername
	I0429 20:26:10.551747   73439 sshutil.go:53] new ssh client: &{IP:192.168.72.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390/id_rsa Username:docker}
	I0429 20:26:10.642680   73439 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:26:10.647351   73439 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:26:10.647375   73439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:26:10.647448   73439 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:26:10.647535   73439 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:26:10.647662   73439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:26:10.658833   73439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:26:10.685383   73439 start.go:296] duration metric: took 137.849949ms for postStartSetup
	I0429 20:26:10.685429   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetConfigRaw
	I0429 20:26:10.685999   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetIP
	I0429 20:26:10.689012   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.689411   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:10.689440   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.689788   73439 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/newest-cni-538390/config.json ...
	I0429 20:26:10.690033   73439 start.go:128] duration metric: took 24.742206544s to createHost
	I0429 20:26:10.690078   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHHostname
	I0429 20:26:10.692237   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.692655   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:10.692682   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.692865   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHPort
	I0429 20:26:10.693067   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:10.693250   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:10.693430   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHUsername
	I0429 20:26:10.693640   73439 main.go:141] libmachine: Using SSH client type: native
	I0429 20:26:10.693832   73439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.75 22 <nil> <nil>}
	I0429 20:26:10.693849   73439 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:26:10.811218   73439 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714422370.793514615
	
	I0429 20:26:10.811249   73439 fix.go:216] guest clock: 1714422370.793514615
	I0429 20:26:10.811259   73439 fix.go:229] Guest: 2024-04-29 20:26:10.793514615 +0000 UTC Remote: 2024-04-29 20:26:10.690048534 +0000 UTC m=+24.869328961 (delta=103.466081ms)
	I0429 20:26:10.811283   73439 fix.go:200] guest clock delta is within tolerance: 103.466081ms
	I0429 20:26:10.811288   73439 start.go:83] releasing machines lock for "newest-cni-538390", held for 24.863560959s
	I0429 20:26:10.811310   73439 main.go:141] libmachine: (newest-cni-538390) Calling .DriverName
	I0429 20:26:10.811587   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetIP
	I0429 20:26:10.814724   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.815086   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:10.815117   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.815311   73439 main.go:141] libmachine: (newest-cni-538390) Calling .DriverName
	I0429 20:26:10.815798   73439 main.go:141] libmachine: (newest-cni-538390) Calling .DriverName
	I0429 20:26:10.816013   73439 main.go:141] libmachine: (newest-cni-538390) Calling .DriverName
	I0429 20:26:10.816098   73439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:26:10.816144   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHHostname
	I0429 20:26:10.816276   73439 ssh_runner.go:195] Run: cat /version.json
	I0429 20:26:10.816301   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHHostname
	I0429 20:26:10.818871   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.819192   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:10.819218   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.819239   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.819431   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHPort
	I0429 20:26:10.819602   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:10.819694   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:10.819712   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:10.819774   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHUsername
	I0429 20:26:10.819894   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHPort
	I0429 20:26:10.819952   73439 sshutil.go:53] new ssh client: &{IP:192.168.72.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390/id_rsa Username:docker}
	I0429 20:26:10.820046   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHKeyPath
	I0429 20:26:10.820198   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetSSHUsername
	I0429 20:26:10.820323   73439 sshutil.go:53] new ssh client: &{IP:192.168.72.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/newest-cni-538390/id_rsa Username:docker}
	I0429 20:26:10.913863   73439 ssh_runner.go:195] Run: systemctl --version
	I0429 20:26:10.935917   73439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:26:11.113135   73439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:26:11.120999   73439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:26:11.121099   73439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:26:11.139127   73439 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:26:11.139157   73439 start.go:494] detecting cgroup driver to use...
	I0429 20:26:11.139230   73439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:26:11.155992   73439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:26:11.171021   73439 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:26:11.171088   73439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:26:11.187252   73439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:26:11.203726   73439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:26:11.342691   73439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:26:11.523773   73439 docker.go:233] disabling docker service ...
	I0429 20:26:11.523848   73439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:26:11.540665   73439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:26:11.555729   73439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:26:11.688464   73439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:26:11.810812   73439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:26:11.827030   73439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:26:11.850674   73439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:26:11.850738   73439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:26:11.862691   73439 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:26:11.862755   73439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:26:11.875556   73439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:26:11.891684   73439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:26:11.907582   73439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:26:11.921945   73439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:26:11.935190   73439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:26:11.963290   73439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:26:11.976095   73439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:26:11.987607   73439 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:26:11.987669   73439 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:26:12.004564   73439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:26:12.016209   73439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:26:12.148568   73439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:26:12.322269   73439 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:26:12.322341   73439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:26:12.328155   73439 start.go:562] Will wait 60s for crictl version
	I0429 20:26:12.328224   73439 ssh_runner.go:195] Run: which crictl
	I0429 20:26:12.333075   73439 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:26:12.380640   73439 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:26:12.380725   73439 ssh_runner.go:195] Run: crio --version
	I0429 20:26:12.430053   73439 ssh_runner.go:195] Run: crio --version
	I0429 20:26:12.470988   73439 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:26:12.472361   73439 main.go:141] libmachine: (newest-cni-538390) Calling .GetIP
	I0429 20:26:12.475361   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:12.475743   73439 main.go:141] libmachine: (newest-cni-538390) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:3f:41", ip: ""} in network mk-newest-cni-538390: {Iface:virbr4 ExpiryTime:2024-04-29 21:26:02 +0000 UTC Type:0 Mac:52:54:00:19:3f:41 Iaid: IPaddr:192.168.72.75 Prefix:24 Hostname:newest-cni-538390 Clientid:01:52:54:00:19:3f:41}
	I0429 20:26:12.475781   73439 main.go:141] libmachine: (newest-cni-538390) DBG | domain newest-cni-538390 has defined IP address 192.168.72.75 and MAC address 52:54:00:19:3f:41 in network mk-newest-cni-538390
	I0429 20:26:12.475961   73439 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0429 20:26:12.481072   73439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:26:12.497906   73439 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0429 20:26:10.813619   73820 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0429 20:26:10.813818   73820 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:26:10.813874   73820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:26:10.831588   73820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0429 20:26:10.832049   73820 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:26:10.832630   73820 main.go:141] libmachine: Using API Version  1
	I0429 20:26:10.832654   73820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:26:10.832995   73820 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:26:10.833192   73820 main.go:141] libmachine: (auto-870155) Calling .GetMachineName
	I0429 20:26:10.833355   73820 main.go:141] libmachine: (auto-870155) Calling .DriverName
	I0429 20:26:10.833524   73820 start.go:159] libmachine.API.Create for "auto-870155" (driver="kvm2")
	I0429 20:26:10.833559   73820 client.go:168] LocalClient.Create starting
	I0429 20:26:10.833593   73820 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem
	I0429 20:26:10.833638   73820 main.go:141] libmachine: Decoding PEM data...
	I0429 20:26:10.833657   73820 main.go:141] libmachine: Parsing certificate...
	I0429 20:26:10.833711   73820 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem
	I0429 20:26:10.833734   73820 main.go:141] libmachine: Decoding PEM data...
	I0429 20:26:10.833748   73820 main.go:141] libmachine: Parsing certificate...
	I0429 20:26:10.833764   73820 main.go:141] libmachine: Running pre-create checks...
	I0429 20:26:10.833778   73820 main.go:141] libmachine: (auto-870155) Calling .PreCreateCheck
	I0429 20:26:10.834142   73820 main.go:141] libmachine: (auto-870155) Calling .GetConfigRaw
	I0429 20:26:10.834587   73820 main.go:141] libmachine: Creating machine...
	I0429 20:26:10.834602   73820 main.go:141] libmachine: (auto-870155) Calling .Create
	I0429 20:26:10.834734   73820 main.go:141] libmachine: (auto-870155) Creating KVM machine...
	I0429 20:26:10.835813   73820 main.go:141] libmachine: (auto-870155) DBG | found existing default KVM network
	I0429 20:26:10.837769   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:10.837608   73903 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015950}
	I0429 20:26:10.837800   73820 main.go:141] libmachine: (auto-870155) DBG | created network xml: 
	I0429 20:26:10.837812   73820 main.go:141] libmachine: (auto-870155) DBG | <network>
	I0429 20:26:10.837826   73820 main.go:141] libmachine: (auto-870155) DBG |   <name>mk-auto-870155</name>
	I0429 20:26:10.837835   73820 main.go:141] libmachine: (auto-870155) DBG |   <dns enable='no'/>
	I0429 20:26:10.837842   73820 main.go:141] libmachine: (auto-870155) DBG |   
	I0429 20:26:10.837853   73820 main.go:141] libmachine: (auto-870155) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0429 20:26:10.837865   73820 main.go:141] libmachine: (auto-870155) DBG |     <dhcp>
	I0429 20:26:10.837876   73820 main.go:141] libmachine: (auto-870155) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0429 20:26:10.837891   73820 main.go:141] libmachine: (auto-870155) DBG |     </dhcp>
	I0429 20:26:10.837918   73820 main.go:141] libmachine: (auto-870155) DBG |   </ip>
	I0429 20:26:10.837944   73820 main.go:141] libmachine: (auto-870155) DBG |   
	I0429 20:26:10.837953   73820 main.go:141] libmachine: (auto-870155) DBG | </network>
	I0429 20:26:10.837959   73820 main.go:141] libmachine: (auto-870155) DBG | 
	I0429 20:26:10.843515   73820 main.go:141] libmachine: (auto-870155) DBG | trying to create private KVM network mk-auto-870155 192.168.39.0/24...
	I0429 20:26:10.919826   73820 main.go:141] libmachine: (auto-870155) DBG | private KVM network mk-auto-870155 192.168.39.0/24 created
	I0429 20:26:10.919872   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:10.919804   73903 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:26:10.919885   73820 main.go:141] libmachine: (auto-870155) Setting up store path in /home/jenkins/minikube-integration/18774-7754/.minikube/machines/auto-870155 ...
	I0429 20:26:10.919903   73820 main.go:141] libmachine: (auto-870155) Building disk image from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 20:26:10.919992   73820 main.go:141] libmachine: (auto-870155) Downloading /home/jenkins/minikube-integration/18774-7754/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 20:26:11.159876   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:11.159764   73903 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/auto-870155/id_rsa...
	I0429 20:26:11.272640   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:11.272518   73903 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/auto-870155/auto-870155.rawdisk...
	I0429 20:26:11.272681   73820 main.go:141] libmachine: (auto-870155) DBG | Writing magic tar header
	I0429 20:26:11.272694   73820 main.go:141] libmachine: (auto-870155) DBG | Writing SSH key tar header
	I0429 20:26:11.272710   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:11.272664   73903 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/auto-870155 ...
	I0429 20:26:11.272841   73820 main.go:141] libmachine: (auto-870155) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/auto-870155
	I0429 20:26:11.272868   73820 main.go:141] libmachine: (auto-870155) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines/auto-870155 (perms=drwx------)
	I0429 20:26:11.272895   73820 main.go:141] libmachine: (auto-870155) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube/machines
	I0429 20:26:11.272914   73820 main.go:141] libmachine: (auto-870155) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:26:11.272927   73820 main.go:141] libmachine: (auto-870155) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18774-7754
	I0429 20:26:11.272941   73820 main.go:141] libmachine: (auto-870155) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0429 20:26:11.272959   73820 main.go:141] libmachine: (auto-870155) DBG | Checking permissions on dir: /home/jenkins
	I0429 20:26:11.272973   73820 main.go:141] libmachine: (auto-870155) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube/machines (perms=drwxr-xr-x)
	I0429 20:26:11.272992   73820 main.go:141] libmachine: (auto-870155) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754/.minikube (perms=drwxr-xr-x)
	I0429 20:26:11.273002   73820 main.go:141] libmachine: (auto-870155) Setting executable bit set on /home/jenkins/minikube-integration/18774-7754 (perms=drwxrwxr-x)
	I0429 20:26:11.273012   73820 main.go:141] libmachine: (auto-870155) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0429 20:26:11.273020   73820 main.go:141] libmachine: (auto-870155) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0429 20:26:11.273030   73820 main.go:141] libmachine: (auto-870155) Creating domain...
	I0429 20:26:11.273051   73820 main.go:141] libmachine: (auto-870155) DBG | Checking permissions on dir: /home
	I0429 20:26:11.273073   73820 main.go:141] libmachine: (auto-870155) DBG | Skipping /home - not owner
	I0429 20:26:11.274257   73820 main.go:141] libmachine: (auto-870155) define libvirt domain using xml: 
	I0429 20:26:11.274278   73820 main.go:141] libmachine: (auto-870155) <domain type='kvm'>
	I0429 20:26:11.274285   73820 main.go:141] libmachine: (auto-870155)   <name>auto-870155</name>
	I0429 20:26:11.274290   73820 main.go:141] libmachine: (auto-870155)   <memory unit='MiB'>3072</memory>
	I0429 20:26:11.274296   73820 main.go:141] libmachine: (auto-870155)   <vcpu>2</vcpu>
	I0429 20:26:11.274300   73820 main.go:141] libmachine: (auto-870155)   <features>
	I0429 20:26:11.274308   73820 main.go:141] libmachine: (auto-870155)     <acpi/>
	I0429 20:26:11.274314   73820 main.go:141] libmachine: (auto-870155)     <apic/>
	I0429 20:26:11.274323   73820 main.go:141] libmachine: (auto-870155)     <pae/>
	I0429 20:26:11.274351   73820 main.go:141] libmachine: (auto-870155)     
	I0429 20:26:11.274358   73820 main.go:141] libmachine: (auto-870155)   </features>
	I0429 20:26:11.274363   73820 main.go:141] libmachine: (auto-870155)   <cpu mode='host-passthrough'>
	I0429 20:26:11.274368   73820 main.go:141] libmachine: (auto-870155)   
	I0429 20:26:11.274372   73820 main.go:141] libmachine: (auto-870155)   </cpu>
	I0429 20:26:11.274377   73820 main.go:141] libmachine: (auto-870155)   <os>
	I0429 20:26:11.274381   73820 main.go:141] libmachine: (auto-870155)     <type>hvm</type>
	I0429 20:26:11.274386   73820 main.go:141] libmachine: (auto-870155)     <boot dev='cdrom'/>
	I0429 20:26:11.274410   73820 main.go:141] libmachine: (auto-870155)     <boot dev='hd'/>
	I0429 20:26:11.274433   73820 main.go:141] libmachine: (auto-870155)     <bootmenu enable='no'/>
	I0429 20:26:11.274439   73820 main.go:141] libmachine: (auto-870155)   </os>
	I0429 20:26:11.274459   73820 main.go:141] libmachine: (auto-870155)   <devices>
	I0429 20:26:11.274471   73820 main.go:141] libmachine: (auto-870155)     <disk type='file' device='cdrom'>
	I0429 20:26:11.274489   73820 main.go:141] libmachine: (auto-870155)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/auto-870155/boot2docker.iso'/>
	I0429 20:26:11.274501   73820 main.go:141] libmachine: (auto-870155)       <target dev='hdc' bus='scsi'/>
	I0429 20:26:11.274511   73820 main.go:141] libmachine: (auto-870155)       <readonly/>
	I0429 20:26:11.274529   73820 main.go:141] libmachine: (auto-870155)     </disk>
	I0429 20:26:11.274543   73820 main.go:141] libmachine: (auto-870155)     <disk type='file' device='disk'>
	I0429 20:26:11.274554   73820 main.go:141] libmachine: (auto-870155)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0429 20:26:11.274571   73820 main.go:141] libmachine: (auto-870155)       <source file='/home/jenkins/minikube-integration/18774-7754/.minikube/machines/auto-870155/auto-870155.rawdisk'/>
	I0429 20:26:11.274583   73820 main.go:141] libmachine: (auto-870155)       <target dev='hda' bus='virtio'/>
	I0429 20:26:11.274595   73820 main.go:141] libmachine: (auto-870155)     </disk>
	I0429 20:26:11.274604   73820 main.go:141] libmachine: (auto-870155)     <interface type='network'>
	I0429 20:26:11.274618   73820 main.go:141] libmachine: (auto-870155)       <source network='mk-auto-870155'/>
	I0429 20:26:11.274629   73820 main.go:141] libmachine: (auto-870155)       <model type='virtio'/>
	I0429 20:26:11.274640   73820 main.go:141] libmachine: (auto-870155)     </interface>
	I0429 20:26:11.274649   73820 main.go:141] libmachine: (auto-870155)     <interface type='network'>
	I0429 20:26:11.274661   73820 main.go:141] libmachine: (auto-870155)       <source network='default'/>
	I0429 20:26:11.274673   73820 main.go:141] libmachine: (auto-870155)       <model type='virtio'/>
	I0429 20:26:11.274682   73820 main.go:141] libmachine: (auto-870155)     </interface>
	I0429 20:26:11.274693   73820 main.go:141] libmachine: (auto-870155)     <serial type='pty'>
	I0429 20:26:11.274703   73820 main.go:141] libmachine: (auto-870155)       <target port='0'/>
	I0429 20:26:11.274713   73820 main.go:141] libmachine: (auto-870155)     </serial>
	I0429 20:26:11.274727   73820 main.go:141] libmachine: (auto-870155)     <console type='pty'>
	I0429 20:26:11.274738   73820 main.go:141] libmachine: (auto-870155)       <target type='serial' port='0'/>
	I0429 20:26:11.274747   73820 main.go:141] libmachine: (auto-870155)     </console>
	I0429 20:26:11.274754   73820 main.go:141] libmachine: (auto-870155)     <rng model='virtio'>
	I0429 20:26:11.274768   73820 main.go:141] libmachine: (auto-870155)       <backend model='random'>/dev/random</backend>
	I0429 20:26:11.274775   73820 main.go:141] libmachine: (auto-870155)     </rng>
	I0429 20:26:11.274783   73820 main.go:141] libmachine: (auto-870155)     
	I0429 20:26:11.274798   73820 main.go:141] libmachine: (auto-870155)     
	I0429 20:26:11.274811   73820 main.go:141] libmachine: (auto-870155)   </devices>
	I0429 20:26:11.274822   73820 main.go:141] libmachine: (auto-870155) </domain>
	I0429 20:26:11.274833   73820 main.go:141] libmachine: (auto-870155) 
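	The XML above is the complete libvirt domain the driver generates for auto-870155: 3072 MiB of memory, 2 vCPUs with host-passthrough, the boot2docker ISO attached as a SCSI CDROM, the raw disk image on virtio, and one NIC on the private mk-auto-870155 network plus one on libvirt's default network. minikube submits it through the libvirt API, but as an illustration only, an equivalent definition could be loaded by hand (the XML file name below is hypothetical):

	virsh -c qemu:///system define auto-870155-domain.xml   # register the domain from the XML above
	virsh -c qemu:///system start auto-870155               # boot it
	virsh -c qemu:///system domifaddr auto-870155           # show the DHCP lease once the guest is up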
	I0429 20:26:11.279326   73820 main.go:141] libmachine: (auto-870155) DBG | domain auto-870155 has defined MAC address 52:54:00:f4:79:17 in network default
	I0429 20:26:11.280010   73820 main.go:141] libmachine: (auto-870155) DBG | domain auto-870155 has defined MAC address 52:54:00:57:08:4c in network mk-auto-870155
	I0429 20:26:11.280040   73820 main.go:141] libmachine: (auto-870155) Ensuring networks are active...
	I0429 20:26:11.280690   73820 main.go:141] libmachine: (auto-870155) Ensuring network default is active
	I0429 20:26:11.281066   73820 main.go:141] libmachine: (auto-870155) Ensuring network mk-auto-870155 is active
	I0429 20:26:11.281674   73820 main.go:141] libmachine: (auto-870155) Getting domain xml...
	I0429 20:26:11.282506   73820 main.go:141] libmachine: (auto-870155) Creating domain...
	I0429 20:26:12.623907   73820 main.go:141] libmachine: (auto-870155) Waiting to get IP...
	I0429 20:26:12.624641   73820 main.go:141] libmachine: (auto-870155) DBG | domain auto-870155 has defined MAC address 52:54:00:57:08:4c in network mk-auto-870155
	I0429 20:26:12.625092   73820 main.go:141] libmachine: (auto-870155) DBG | unable to find current IP address of domain auto-870155 in network mk-auto-870155
	I0429 20:26:12.625115   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:12.625052   73903 retry.go:31] will retry after 198.005053ms: waiting for machine to come up
	I0429 20:26:12.824829   73820 main.go:141] libmachine: (auto-870155) DBG | domain auto-870155 has defined MAC address 52:54:00:57:08:4c in network mk-auto-870155
	I0429 20:26:12.825449   73820 main.go:141] libmachine: (auto-870155) DBG | unable to find current IP address of domain auto-870155 in network mk-auto-870155
	I0429 20:26:12.825481   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:12.825417   73903 retry.go:31] will retry after 262.866324ms: waiting for machine to come up
	I0429 20:26:13.090202   73820 main.go:141] libmachine: (auto-870155) DBG | domain auto-870155 has defined MAC address 52:54:00:57:08:4c in network mk-auto-870155
	I0429 20:26:13.091057   73820 main.go:141] libmachine: (auto-870155) DBG | unable to find current IP address of domain auto-870155 in network mk-auto-870155
	I0429 20:26:13.091079   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:13.090973   73903 retry.go:31] will retry after 437.518685ms: waiting for machine to come up
	I0429 20:26:12.499207   73439 kubeadm.go:877] updating cluster {Name:newest-cni-538390 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:newest-cni-538390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.75 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
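	The block above is the full ClusterConfig minikube records for the newest-cni-538390 profile: kvm2 driver, 2200 MB and 2 CPUs, CRI-O as the runtime, Kubernetes v1.30.0 with the ServerSideApply feature gate, a kubeadm pod-network-cidr of 10.42.0.0/16, and a single control-plane node at 192.168.72.75. The same structure is persisted on the host, so it can be read back without scraping logs; the path below follows minikube's usual profile layout under this job's MINIKUBE_HOME and is an assumption, not something shown in the log:

	cat /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/newest-cni-538390/config.json
	minikube profile list -o json   # the same data via the CLI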
	I0429 20:26:12.499329   73439 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:26:12.499402   73439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:26:12.542170   73439 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:26:12.542233   73439 ssh_runner.go:195] Run: which lz4
	I0429 20:26:12.547055   73439 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:26:12.552434   73439 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:26:12.552457   73439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 20:26:14.415197   73439 crio.go:462] duration metric: took 1.86817763s to copy over tarball
	I0429 20:26:14.415271   73439 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
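	Because crictl reported no preloaded kube-apiserver image, the run falls back to the preload tarball: it checks for /preloaded.tar.lz4 on the guest (the stat fails with "No such file or directory"; the %!s(MISSING) in the log is just Go's fmt escaping of the %s %y format string), copies the ~394 MB cri-o preload from the host cache over SSH, and unpacks it into /var. The guest-side commands are the ones visible in the log; only the ssh wrapper below is a hypothetical stand-in for minikube's ssh_runner:

	ssh <guest> 'which lz4 && stat -c "%s %y" /preloaded.tar.lz4'   # <guest> is a hypothetical alias for the node
	scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 <guest>:/preloaded.tar.lz4
	ssh <guest> 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4'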
	I0429 20:26:13.530736   73820 main.go:141] libmachine: (auto-870155) DBG | domain auto-870155 has defined MAC address 52:54:00:57:08:4c in network mk-auto-870155
	I0429 20:26:13.531338   73820 main.go:141] libmachine: (auto-870155) DBG | unable to find current IP address of domain auto-870155 in network mk-auto-870155
	I0429 20:26:13.531368   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:13.531257   73903 retry.go:31] will retry after 552.575221ms: waiting for machine to come up
	I0429 20:26:14.086906   73820 main.go:141] libmachine: (auto-870155) DBG | domain auto-870155 has defined MAC address 52:54:00:57:08:4c in network mk-auto-870155
	I0429 20:26:14.087524   73820 main.go:141] libmachine: (auto-870155) DBG | unable to find current IP address of domain auto-870155 in network mk-auto-870155
	I0429 20:26:14.087569   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:14.087492   73903 retry.go:31] will retry after 622.435587ms: waiting for machine to come up
	I0429 20:26:14.711207   73820 main.go:141] libmachine: (auto-870155) DBG | domain auto-870155 has defined MAC address 52:54:00:57:08:4c in network mk-auto-870155
	I0429 20:26:14.711688   73820 main.go:141] libmachine: (auto-870155) DBG | unable to find current IP address of domain auto-870155 in network mk-auto-870155
	I0429 20:26:14.711722   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:14.711630   73903 retry.go:31] will retry after 877.171394ms: waiting for machine to come up
	I0429 20:26:15.590341   73820 main.go:141] libmachine: (auto-870155) DBG | domain auto-870155 has defined MAC address 52:54:00:57:08:4c in network mk-auto-870155
	I0429 20:26:15.590886   73820 main.go:141] libmachine: (auto-870155) DBG | unable to find current IP address of domain auto-870155 in network mk-auto-870155
	I0429 20:26:15.590925   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:15.590852   73903 retry.go:31] will retry after 769.567405ms: waiting for machine to come up
	I0429 20:26:16.362449   73820 main.go:141] libmachine: (auto-870155) DBG | domain auto-870155 has defined MAC address 52:54:00:57:08:4c in network mk-auto-870155
	I0429 20:26:16.362964   73820 main.go:141] libmachine: (auto-870155) DBG | unable to find current IP address of domain auto-870155 in network mk-auto-870155
	I0429 20:26:16.362998   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:16.362925   73903 retry.go:31] will retry after 1.278438588s: waiting for machine to come up
	I0429 20:26:17.643334   73820 main.go:141] libmachine: (auto-870155) DBG | domain auto-870155 has defined MAC address 52:54:00:57:08:4c in network mk-auto-870155
	I0429 20:26:17.643757   73820 main.go:141] libmachine: (auto-870155) DBG | unable to find current IP address of domain auto-870155 in network mk-auto-870155
	I0429 20:26:17.643802   73820 main.go:141] libmachine: (auto-870155) DBG | I0429 20:26:17.643742   73903 retry.go:31] will retry after 1.830694921s: waiting for machine to come up
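	Creating the domain succeeds at 20:26:12, after which the driver polls for the machine's IP on mk-auto-870155, backing off from roughly 200 ms to 1.8 s between attempts until DHCP hands out a lease. Waiting for the same thing outside minikube is just a poll loop against libvirt; a minimal sketch, with the domain name taken from this log and the timeout an assumption, would be:

	# wait up to ~5 minutes for auto-870155 to obtain an IPv4 lease
	for i in $(seq 1 60); do
	  ip=$(virsh -c qemu:///system domifaddr auto-870155 | awk '/ipv4/ {print $4}')
	  if [ -n "$ip" ]; then echo "machine is up at ${ip%/*}"; break; fi
	  sleep 5
	done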
	
	
	==> CRI-O <==
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.822304668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422379822278757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49f90b88-d1c8-4860-8abc-888e35cc713e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.823075092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9e91b55-82a2-441b-9cc0-5a6a9a927715 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.823171739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9e91b55-82a2-441b-9cc0-5a6a9a927715 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.823553409Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09da5e7f79c2ee756da6d7c8cf7a9ec0b14bdc89660de0be5a1789c9837fd07,PodSandboxId:13592af169e448e5456d1d29dc85bd4eedcec210384f808c4c2539706bd88a20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521967460227,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr6bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d14ff20-6dab-4c02-b91c-0a1e326f1593,},Annotations:map[string]string{io.kubernetes.container.hash: 91deb564,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597fd296f206b84c8ad021a50f3526c8b69470bcd90ac39ae7a40306854ac9ab,PodSandboxId:6c2a642e889be4553156c6036285037a1636412f1eae02d2922255a6918550aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521752653889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7z6zv,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 422451a2-615d-4bf8-8de8-d5fa5805219f,},Annotations:map[string]string{io.kubernetes.container.hash: 87b952de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33707b709281cf6d469a14ea10a8cb2fb05aef0c451ee7f796955d8b2427f31c,PodSandboxId:bdfdecde861bfd2cf502c71fcd70c011782565210ed637fce8516949fd5dc98c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNI
NG,CreatedAt:1714421521336688944,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wq48j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b3b23ef-b5b4-4754-bc44-73e1d51a18d7,},Annotations:map[string]string{io.kubernetes.container.hash: ffdf8adb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4c99c955ac14fd43f2860e60f90fbf6dc91c1a2bbbc6b25a4d5172dd64b414c,PodSandboxId:6161d1c61f8548c2bb80e7a990b2f11c843286c32dcf6abeebe77d1a04416ec5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17144215212
90879391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93e046a1-3867-44e1-8a4f-cf0eba6dfd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a656cc1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:033c21bf724950eb59ec37c01840cbebc97390462ad40103725deafe34097f6b,PodSandboxId:d2fe13c2e877279ab6de3e9b96103e8eea857ea9db5192cf6171e22de3109a13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421500465616080,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36865aa59e33dd34dad6ead2415cbd18,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c91d1f0aa2317ec388dc984455f7fb8ba9122c34b93beeab627bb543f4130e5,PodSandboxId:5aa89d2eb3f7230b08418ea015fb01e19fa14a7215fc209c1091595934e5df5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421500432041375,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7ea45965b21a7a2a5f5deef15a1c2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 62a4f4c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0c3731b411f006dfdb676571885a831207d11b62ed4444e5a6c3e610ec16f1,PodSandboxId:08d9c94bbc65edcd3a4b048af68505b557a2a0af7d162ccffc74067949576229,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421500381505262,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7ec996aacb64787a59cb6e9e29694d7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc25e7d837b61d7d50a1dd053ffb81a7f6d7f77c27275ac7d1dad349bcac838,PodSandboxId:9b4013dcd5ac92b83f45f2965cf266016c5274d6239a53d06bd2ca7a432fb501,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421500327618152,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7fa20f1275f39c0dbd2f28238557da,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b9e91b55-82a2-441b-9cc0-5a6a9a927715 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.879087513Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=260a589a-73bf-4a22-9606-f04cab59261b name=/runtime.v1.RuntimeService/Version
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.879345481Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=260a589a-73bf-4a22-9606-f04cab59261b name=/runtime.v1.RuntimeService/Version
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.881662843Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2984868c-917a-4036-9610-923d5dbf46f4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.882165006Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422379882138592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2984868c-917a-4036-9610-923d5dbf46f4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.883232545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b039373-2a5f-499e-af16-6542644b83e7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.883697869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b039373-2a5f-499e-af16-6542644b83e7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.884182956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09da5e7f79c2ee756da6d7c8cf7a9ec0b14bdc89660de0be5a1789c9837fd07,PodSandboxId:13592af169e448e5456d1d29dc85bd4eedcec210384f808c4c2539706bd88a20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521967460227,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr6bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d14ff20-6dab-4c02-b91c-0a1e326f1593,},Annotations:map[string]string{io.kubernetes.container.hash: 91deb564,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597fd296f206b84c8ad021a50f3526c8b69470bcd90ac39ae7a40306854ac9ab,PodSandboxId:6c2a642e889be4553156c6036285037a1636412f1eae02d2922255a6918550aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521752653889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7z6zv,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 422451a2-615d-4bf8-8de8-d5fa5805219f,},Annotations:map[string]string{io.kubernetes.container.hash: 87b952de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33707b709281cf6d469a14ea10a8cb2fb05aef0c451ee7f796955d8b2427f31c,PodSandboxId:bdfdecde861bfd2cf502c71fcd70c011782565210ed637fce8516949fd5dc98c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNI
NG,CreatedAt:1714421521336688944,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wq48j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b3b23ef-b5b4-4754-bc44-73e1d51a18d7,},Annotations:map[string]string{io.kubernetes.container.hash: ffdf8adb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4c99c955ac14fd43f2860e60f90fbf6dc91c1a2bbbc6b25a4d5172dd64b414c,PodSandboxId:6161d1c61f8548c2bb80e7a990b2f11c843286c32dcf6abeebe77d1a04416ec5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17144215212
90879391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93e046a1-3867-44e1-8a4f-cf0eba6dfd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a656cc1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:033c21bf724950eb59ec37c01840cbebc97390462ad40103725deafe34097f6b,PodSandboxId:d2fe13c2e877279ab6de3e9b96103e8eea857ea9db5192cf6171e22de3109a13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421500465616080,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36865aa59e33dd34dad6ead2415cbd18,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c91d1f0aa2317ec388dc984455f7fb8ba9122c34b93beeab627bb543f4130e5,PodSandboxId:5aa89d2eb3f7230b08418ea015fb01e19fa14a7215fc209c1091595934e5df5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421500432041375,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7ea45965b21a7a2a5f5deef15a1c2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 62a4f4c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0c3731b411f006dfdb676571885a831207d11b62ed4444e5a6c3e610ec16f1,PodSandboxId:08d9c94bbc65edcd3a4b048af68505b557a2a0af7d162ccffc74067949576229,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421500381505262,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7ec996aacb64787a59cb6e9e29694d7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc25e7d837b61d7d50a1dd053ffb81a7f6d7f77c27275ac7d1dad349bcac838,PodSandboxId:9b4013dcd5ac92b83f45f2965cf266016c5274d6239a53d06bd2ca7a432fb501,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421500327618152,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7fa20f1275f39c0dbd2f28238557da,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b039373-2a5f-499e-af16-6542644b83e7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.940257703Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87757def-6d98-49fd-9089-ee8c003e29c5 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.940368408Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87757def-6d98-49fd-9089-ee8c003e29c5 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.942096224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2180c812-43d1-4cd9-a985-644f90bfe9b0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.942509369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422379942486165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2180c812-43d1-4cd9-a985-644f90bfe9b0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.943303590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6dd0a30-d274-42d9-86e5-4f5c1bd3a2ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.943399125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6dd0a30-d274-42d9-86e5-4f5c1bd3a2ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.944248693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09da5e7f79c2ee756da6d7c8cf7a9ec0b14bdc89660de0be5a1789c9837fd07,PodSandboxId:13592af169e448e5456d1d29dc85bd4eedcec210384f808c4c2539706bd88a20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521967460227,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr6bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d14ff20-6dab-4c02-b91c-0a1e326f1593,},Annotations:map[string]string{io.kubernetes.container.hash: 91deb564,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597fd296f206b84c8ad021a50f3526c8b69470bcd90ac39ae7a40306854ac9ab,PodSandboxId:6c2a642e889be4553156c6036285037a1636412f1eae02d2922255a6918550aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521752653889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7z6zv,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 422451a2-615d-4bf8-8de8-d5fa5805219f,},Annotations:map[string]string{io.kubernetes.container.hash: 87b952de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33707b709281cf6d469a14ea10a8cb2fb05aef0c451ee7f796955d8b2427f31c,PodSandboxId:bdfdecde861bfd2cf502c71fcd70c011782565210ed637fce8516949fd5dc98c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNI
NG,CreatedAt:1714421521336688944,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wq48j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b3b23ef-b5b4-4754-bc44-73e1d51a18d7,},Annotations:map[string]string{io.kubernetes.container.hash: ffdf8adb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4c99c955ac14fd43f2860e60f90fbf6dc91c1a2bbbc6b25a4d5172dd64b414c,PodSandboxId:6161d1c61f8548c2bb80e7a990b2f11c843286c32dcf6abeebe77d1a04416ec5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17144215212
90879391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93e046a1-3867-44e1-8a4f-cf0eba6dfd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a656cc1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:033c21bf724950eb59ec37c01840cbebc97390462ad40103725deafe34097f6b,PodSandboxId:d2fe13c2e877279ab6de3e9b96103e8eea857ea9db5192cf6171e22de3109a13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421500465616080,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36865aa59e33dd34dad6ead2415cbd18,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c91d1f0aa2317ec388dc984455f7fb8ba9122c34b93beeab627bb543f4130e5,PodSandboxId:5aa89d2eb3f7230b08418ea015fb01e19fa14a7215fc209c1091595934e5df5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421500432041375,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7ea45965b21a7a2a5f5deef15a1c2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 62a4f4c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0c3731b411f006dfdb676571885a831207d11b62ed4444e5a6c3e610ec16f1,PodSandboxId:08d9c94bbc65edcd3a4b048af68505b557a2a0af7d162ccffc74067949576229,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421500381505262,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7ec996aacb64787a59cb6e9e29694d7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc25e7d837b61d7d50a1dd053ffb81a7f6d7f77c27275ac7d1dad349bcac838,PodSandboxId:9b4013dcd5ac92b83f45f2965cf266016c5274d6239a53d06bd2ca7a432fb501,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421500327618152,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7fa20f1275f39c0dbd2f28238557da,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6dd0a30-d274-42d9-86e5-4f5c1bd3a2ec name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.988619440Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=62af3eda-fff2-4b3c-b684-6847c501f2bd name=/runtime.v1.RuntimeService/Version
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.989173185Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=62af3eda-fff2-4b3c-b684-6847c501f2bd name=/runtime.v1.RuntimeService/Version
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.990631143Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d1fb1eb-3da0-47c3-a65a-671db2fb7f60 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.991339891Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422379991311719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d1fb1eb-3da0-47c3-a65a-671db2fb7f60 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.993974170Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0d35c90-c816-45ac-8f7b-65f5fd4d3d7a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.994060265Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0d35c90-c816-45ac-8f7b-65f5fd4d3d7a name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:26:19 embed-certs-161370 crio[726]: time="2024-04-29 20:26:19.994261688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f09da5e7f79c2ee756da6d7c8cf7a9ec0b14bdc89660de0be5a1789c9837fd07,PodSandboxId:13592af169e448e5456d1d29dc85bd4eedcec210384f808c4c2539706bd88a20,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521967460227,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rr6bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d14ff20-6dab-4c02-b91c-0a1e326f1593,},Annotations:map[string]string{io.kubernetes.container.hash: 91deb564,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:597fd296f206b84c8ad021a50f3526c8b69470bcd90ac39ae7a40306854ac9ab,PodSandboxId:6c2a642e889be4553156c6036285037a1636412f1eae02d2922255a6918550aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1714421521752653889,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7z6zv,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 422451a2-615d-4bf8-8de8-d5fa5805219f,},Annotations:map[string]string{io.kubernetes.container.hash: 87b952de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33707b709281cf6d469a14ea10a8cb2fb05aef0c451ee7f796955d8b2427f31c,PodSandboxId:bdfdecde861bfd2cf502c71fcd70c011782565210ed637fce8516949fd5dc98c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNI
NG,CreatedAt:1714421521336688944,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wq48j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b3b23ef-b5b4-4754-bc44-73e1d51a18d7,},Annotations:map[string]string{io.kubernetes.container.hash: ffdf8adb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4c99c955ac14fd43f2860e60f90fbf6dc91c1a2bbbc6b25a4d5172dd64b414c,PodSandboxId:6161d1c61f8548c2bb80e7a990b2f11c843286c32dcf6abeebe77d1a04416ec5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:17144215212
90879391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93e046a1-3867-44e1-8a4f-cf0eba6dfd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3a656cc1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:033c21bf724950eb59ec37c01840cbebc97390462ad40103725deafe34097f6b,PodSandboxId:d2fe13c2e877279ab6de3e9b96103e8eea857ea9db5192cf6171e22de3109a13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1714421500465616080,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36865aa59e33dd34dad6ead2415cbd18,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c91d1f0aa2317ec388dc984455f7fb8ba9122c34b93beeab627bb543f4130e5,PodSandboxId:5aa89d2eb3f7230b08418ea015fb01e19fa14a7215fc209c1091595934e5df5f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1714421500432041375,Labels:map[string]string{io.ku
bernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7ea45965b21a7a2a5f5deef15a1c2cd,},Annotations:map[string]string{io.kubernetes.container.hash: 62a4f4c4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a0c3731b411f006dfdb676571885a831207d11b62ed4444e5a6c3e610ec16f1,PodSandboxId:08d9c94bbc65edcd3a4b048af68505b557a2a0af7d162ccffc74067949576229,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1714421500381505262,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7ec996aacb64787a59cb6e9e29694d7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4bc25e7d837b61d7d50a1dd053ffb81a7f6d7f77c27275ac7d1dad349bcac838,PodSandboxId:9b4013dcd5ac92b83f45f2965cf266016c5274d6239a53d06bd2ca7a432fb501,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1714421500327618152,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-161370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c7fa20f1275f39c0dbd2f28238557da,},Annotations:map[string]string{io.kubernetes.container.hash: 293887a1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0d35c90-c816-45ac-8f7b-65f5fd4d3d7a name=/runtime.v1.RuntimeService/ListContainers
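	The CRI-O entries above are debug-level gRPC traces from the embed-certs-161370 node: the kubelet's periodic Version, ImageFsInfo, and ListContainers calls, each answered from the overlay-images store and the eight running containers summarized below. The same stream can be tailed on the node directly; the commands below are standard and use this report's profile name:

	minikube -p embed-certs-161370 ssh -- sudo journalctl -u crio -n 50   # recent CRI-O log lines
	minikube -p embed-certs-161370 ssh -- sudo crictl info                # runtime status and configuration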
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f09da5e7f79c2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   13592af169e44       coredns-7db6d8ff4d-rr6bd
	597fd296f206b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   6c2a642e889be       coredns-7db6d8ff4d-7z6zv
	33707b709281c       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   14 minutes ago      Running             kube-proxy                0                   bdfdecde861bf       kube-proxy-wq48j
	d4c99c955ac14       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   6161d1c61f854       storage-provisioner
	033c21bf72495       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   14 minutes ago      Running             kube-scheduler            2                   d2fe13c2e8772       kube-scheduler-embed-certs-161370
	2c91d1f0aa231       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 minutes ago      Running             etcd                      2                   5aa89d2eb3f72       etcd-embed-certs-161370
	9a0c3731b411f       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   14 minutes ago      Running             kube-controller-manager   2                   08d9c94bbc65e       kube-controller-manager-embed-certs-161370
	4bc25e7d837b6       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   14 minutes ago      Running             kube-apiserver            2                   9b4013dcd5ac9       kube-apiserver-embed-certs-161370
	
	
	==> coredns [597fd296f206b84c8ad021a50f3526c8b69470bcd90ac39ae7a40306854ac9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f09da5e7f79c2ee756da6d7c8cf7a9ec0b14bdc89660de0be5a1789c9837fd07] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-161370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-161370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=embed-certs-161370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T20_11_46_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 20:11:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-161370
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 20:26:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 20:22:19 +0000   Mon, 29 Apr 2024 20:11:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 20:22:19 +0000   Mon, 29 Apr 2024 20:11:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 20:22:19 +0000   Mon, 29 Apr 2024 20:11:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 20:22:19 +0000   Mon, 29 Apr 2024 20:11:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.184
	  Hostname:    embed-certs-161370
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b833b658df1947c7910ffce5e3af6ef9
	  System UUID:                b833b658-df19-47c7-910f-fce5e3af6ef9
	  Boot ID:                    e66f7a71-4f64-4c86-bf6e-31a74b9aadc6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-7z6zv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-rr6bd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-161370                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-embed-certs-161370             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-embed-certs-161370    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-wq48j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-161370             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-569cc877fc-x2wb6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node embed-certs-161370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node embed-certs-161370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node embed-certs-161370 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node embed-certs-161370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node embed-certs-161370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node embed-certs-161370 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node embed-certs-161370 event: Registered Node embed-certs-161370 in Controller
	
	
	==> dmesg <==
	[  +0.043855] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.981581] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.673771] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.750022] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.105719] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.069611] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064816] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.197537] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.167047] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.345200] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +5.227718] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +0.069840] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.973645] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +5.609639] kauditd_printk_skb: 97 callbacks suppressed
	[Apr29 20:07] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.013986] kauditd_printk_skb: 22 callbacks suppressed
	[Apr29 20:11] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.935620] systemd-fstab-generator[3583]: Ignoring "noauto" option for root device
	[  +4.436884] kauditd_printk_skb: 57 callbacks suppressed
	[  +2.122430] systemd-fstab-generator[3905]: Ignoring "noauto" option for root device
	[ +13.977684] systemd-fstab-generator[4108]: Ignoring "noauto" option for root device
	[  +0.084109] kauditd_printk_skb: 14 callbacks suppressed
	[Apr29 20:13] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [2c91d1f0aa2317ec388dc984455f7fb8ba9122c34b93beeab627bb543f4130e5] <==
	{"level":"info","ts":"2024-04-29T20:11:40.870337Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"bf2ced3b97aa693f","initial-advertise-peer-urls":["https://192.168.50.184:2380"],"listen-peer-urls":["https://192.168.50.184:2380"],"advertise-client-urls":["https://192.168.50.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T20:11:40.870483Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T20:11:40.870829Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.184:2380"}
	{"level":"info","ts":"2024-04-29T20:11:40.872836Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.184:2380"}
	{"level":"info","ts":"2024-04-29T20:11:41.78343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T20:11:41.783507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T20:11:41.783529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f received MsgPreVoteResp from bf2ced3b97aa693f at term 1"}
	{"level":"info","ts":"2024-04-29T20:11:41.783541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T20:11:41.783547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f received MsgVoteResp from bf2ced3b97aa693f at term 2"}
	{"level":"info","ts":"2024-04-29T20:11:41.783554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f became leader at term 2"}
	{"level":"info","ts":"2024-04-29T20:11:41.783571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bf2ced3b97aa693f elected leader bf2ced3b97aa693f at term 2"}
	{"level":"info","ts":"2024-04-29T20:11:41.785588Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:11:41.787044Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"bf2ced3b97aa693f","local-member-attributes":"{Name:embed-certs-161370 ClientURLs:[https://192.168.50.184:2379]}","request-path":"/0/members/bf2ced3b97aa693f/attributes","cluster-id":"dfaeaf2ad25a061e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T20:11:41.787242Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:11:41.787264Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T20:11:41.787407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dfaeaf2ad25a061e","local-member-id":"bf2ced3b97aa693f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:11:41.788528Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:11:41.78859Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T20:11:41.790445Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T20:11:41.787628Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T20:11:41.790599Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T20:11:41.792128Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.184:2379"}
	{"level":"info","ts":"2024-04-29T20:21:41.850145Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-04-29T20:21:41.859954Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":714,"took":"9.415769ms","hash":3419219816,"current-db-size-bytes":2383872,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2383872,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-29T20:21:41.860027Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3419219816,"revision":714,"compact-revision":-1}
	
	
	==> kernel <==
	 20:26:20 up 19 min,  0 users,  load average: 0.43, 0.40, 0.34
	Linux embed-certs-161370 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4bc25e7d837b61d7d50a1dd053ffb81a7f6d7f77c27275ac7d1dad349bcac838] <==
	I0429 20:19:44.281220       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:21:43.283031       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:21:43.283323       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0429 20:21:44.283912       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:21:44.284056       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:21:44.284077       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:21:44.284162       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:21:44.284184       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:21:44.285295       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:22:44.285154       1 handler_proxy.go:93] no RequestInfo found in the context
	W0429 20:22:44.285477       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:22:44.285542       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:22:44.285569       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0429 20:22:44.285814       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:22:44.287106       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:24:44.286613       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:24:44.286691       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0429 20:24:44.286699       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0429 20:24:44.288003       1 handler_proxy.go:93] no RequestInfo found in the context
	E0429 20:24:44.288182       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0429 20:24:44.288262       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9a0c3731b411f006dfdb676571885a831207d11b62ed4444e5a6c3e610ec16f1] <==
	I0429 20:20:29.311445       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:20:58.860943       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:20:59.322615       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:21:28.866630       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:21:29.331931       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:21:58.872300       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:21:59.341035       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:22:28.879277       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:22:29.351973       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:22:58.885860       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:22:59.361052       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0429 20:23:13.245976       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="2.249962ms"
	I0429 20:23:28.242311       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="116.608µs"
	E0429 20:23:28.894185       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:23:29.370400       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:23:58.902542       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:23:59.380499       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:24:28.909278       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:24:29.389586       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:24:58.915084       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:24:59.402089       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:25:28.920397       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:25:29.410454       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0429 20:25:58.932097       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0429 20:25:59.421206       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [33707b709281cf6d469a14ea10a8cb2fb05aef0c451ee7f796955d8b2427f31c] <==
	I0429 20:12:01.958964       1 server_linux.go:69] "Using iptables proxy"
	I0429 20:12:01.989284       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.184"]
	I0429 20:12:02.150512       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 20:12:02.150587       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 20:12:02.150613       1 server_linux.go:165] "Using iptables Proxier"
	I0429 20:12:02.159576       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 20:12:02.159886       1 server.go:872] "Version info" version="v1.30.0"
	I0429 20:12:02.159942       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 20:12:02.161088       1 config.go:192] "Starting service config controller"
	I0429 20:12:02.161132       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 20:12:02.161161       1 config.go:101] "Starting endpoint slice config controller"
	I0429 20:12:02.161165       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 20:12:02.164562       1 config.go:319] "Starting node config controller"
	I0429 20:12:02.164603       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 20:12:02.261288       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 20:12:02.261358       1 shared_informer.go:320] Caches are synced for service config
	I0429 20:12:02.265037       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [033c21bf724950eb59ec37c01840cbebc97390462ad40103725deafe34097f6b] <==
	W0429 20:11:44.112830       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 20:11:44.112991       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 20:11:44.152241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 20:11:44.153776       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 20:11:44.162873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0429 20:11:44.163289       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0429 20:11:44.280078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0429 20:11:44.280175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0429 20:11:44.422447       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 20:11:44.423557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 20:11:44.473712       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 20:11:44.473949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 20:11:44.474109       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 20:11:44.474360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 20:11:44.499225       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0429 20:11:44.499409       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0429 20:11:44.529610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 20:11:44.530194       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 20:11:44.657814       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 20:11:44.657963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 20:11:44.687682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0429 20:11:44.687884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0429 20:11:44.704566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0429 20:11:44.705145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0429 20:11:46.768678       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 20:23:46 embed-certs-161370 kubelet[3912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:23:46 embed-certs-161370 kubelet[3912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:23:46 embed-certs-161370 kubelet[3912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:23:56 embed-certs-161370 kubelet[3912]: E0429 20:23:56.216153    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:24:07 embed-certs-161370 kubelet[3912]: E0429 20:24:07.215647    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:24:19 embed-certs-161370 kubelet[3912]: E0429 20:24:19.217104    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:24:33 embed-certs-161370 kubelet[3912]: E0429 20:24:33.216819    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:24:46 embed-certs-161370 kubelet[3912]: E0429 20:24:46.235635    3912 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:24:46 embed-certs-161370 kubelet[3912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:24:46 embed-certs-161370 kubelet[3912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:24:46 embed-certs-161370 kubelet[3912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:24:46 embed-certs-161370 kubelet[3912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:24:47 embed-certs-161370 kubelet[3912]: E0429 20:24:47.217089    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:24:59 embed-certs-161370 kubelet[3912]: E0429 20:24:59.216325    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:25:13 embed-certs-161370 kubelet[3912]: E0429 20:25:13.216060    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:25:24 embed-certs-161370 kubelet[3912]: E0429 20:25:24.218710    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:25:36 embed-certs-161370 kubelet[3912]: E0429 20:25:36.219220    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:25:46 embed-certs-161370 kubelet[3912]: E0429 20:25:46.236289    3912 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 20:25:46 embed-certs-161370 kubelet[3912]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 20:25:46 embed-certs-161370 kubelet[3912]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 20:25:46 embed-certs-161370 kubelet[3912]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 20:25:46 embed-certs-161370 kubelet[3912]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 20:25:48 embed-certs-161370 kubelet[3912]: E0429 20:25:48.215455    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:26:01 embed-certs-161370 kubelet[3912]: E0429 20:26:01.215988    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	Apr 29 20:26:12 embed-certs-161370 kubelet[3912]: E0429 20:26:12.218237    3912 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-x2wb6" podUID="cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3"
	
	
	==> storage-provisioner [d4c99c955ac14fd43f2860e60f90fbf6dc91c1a2bbbc6b25a4d5172dd64b414c] <==
	I0429 20:12:01.553195       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 20:12:01.611976       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 20:12:01.612052       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 20:12:01.659691       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 20:12:01.663401       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-161370_9284d8e6-3cb5-4ea7-941d-d82a438201d0!
	I0429 20:12:01.686296       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2aacdc58-addd-4906-9ef8-55619688bc13", APIVersion:"v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-161370_9284d8e6-3cb5-4ea7-941d-d82a438201d0 became leader
	I0429 20:12:01.764535       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-161370_9284d8e6-3cb5-4ea7-941d-d82a438201d0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-161370 -n embed-certs-161370
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-161370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-x2wb6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-161370 describe pod metrics-server-569cc877fc-x2wb6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-161370 describe pod metrics-server-569cc877fc-x2wb6: exit status 1 (90.272519ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-x2wb6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-161370 describe pod metrics-server-569cc877fc-x2wb6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (312.39s)
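The post-mortem above can be repeated by hand; a minimal sketch, assuming the embed-certs-161370 profile is still running and that the metrics-server pods carry the upstream default label k8s-app=metrics-server (an assumption, not taken from the harness output):

	kubectl --context embed-certs-161370 get po -A --field-selector=status.phase!=Running    # same non-running-pod listing the helper runs at helpers_test.go:261
	kubectl --context embed-certs-161370 -n kube-system describe pod -l k8s-app=metrics-server    # describe by label, so a recreated pod name still matches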

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (149.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
E0429 20:24:00.894185   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
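helpers_test.go:329: note: the connection-refused warning above was emitted repeatedly throughout the 9m0s wait, because the profile's apiserver on 192.168.72.240:8443 never came back. A rough way to reproduce the same two checks by hand (commands assembled from the profile name and label selector shown in this log, not part of the test harness itself) would be:

	# is the profile's apiserver actually up? (same status command the harness runs below)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-919612

	# the pods the test is polling for, using the same namespace and label selector
	kubectl --context old-k8s-version-919612 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard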
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-919612 -n old-k8s-version-919612
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-919612 -n old-k8s-version-919612: exit status 2 (251.476058ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-919612" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-919612 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-919612 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.824µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-919612 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
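start_stop_delete_test.go:297: note: the image assertion above expects registry.k8s.io/echoserver:1.4 to appear among the containers of the dashboard-metrics-scraper deployment. A hand-rolled equivalent of that check (a hypothetical one-liner, usable only once the apiserver is reachable again) might look like:

	kubectl --context old-k8s-version-919612 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}' \
	  | grep registry.k8s.io/echoserver:1.4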
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612: exit status 2 (244.153939ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-919612 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-919612 logs -n 25: (1.652776487s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:55 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| ssh     | cert-options-437743 ssh                                | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-437743 -- sudo                         | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-437743                                 | cert-options-437743          | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:56 UTC |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:56 UTC | 29 Apr 24 19:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-161370            | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-456788             | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-509508                              | cert-expiration-509508       | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	| delete  | -p                                                     | disable-driver-mounts-193781 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 19:58 UTC |
	|         | disable-driver-mounts-193781                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 19:58 UTC | 29 Apr 24 20:00 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-866143  | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-161370                 | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-919612        | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-161370                                  | embed-certs-161370           | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC | 29 Apr 24 20:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-456788                  | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:00 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-456788                                   | no-preload-456788            | jenkins | v1.33.0 | 29 Apr 24 20:01 UTC | 29 Apr 24 20:11 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-919612             | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-919612                              | old-k8s-version-919612       | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-866143       | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-866143 | jenkins | v1.33.0 | 29 Apr 24 20:02 UTC | 29 Apr 24 20:10 UTC |
	|         | default-k8s-diff-port-866143                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 20:02:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 20:02:45.502823   66875 out.go:291] Setting OutFile to fd 1 ...
	I0429 20:02:45.503073   66875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:45.503084   66875 out.go:304] Setting ErrFile to fd 2...
	I0429 20:02:45.503089   66875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 20:02:45.503272   66875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 20:02:45.503808   66875 out.go:298] Setting JSON to false
	I0429 20:02:45.504681   66875 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6263,"bootTime":1714414702,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 20:02:45.504736   66875 start.go:139] virtualization: kvm guest
	I0429 20:02:45.507344   66875 out.go:177] * [default-k8s-diff-port-866143] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 20:02:45.508715   66875 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 20:02:45.508745   66875 notify.go:220] Checking for updates...
	I0429 20:02:45.510093   66875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 20:02:45.512200   66875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:02:45.513622   66875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 20:02:45.514915   66875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 20:02:45.516228   66875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 20:02:45.517923   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:02:45.518366   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:45.518446   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:45.533484   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46187
	I0429 20:02:45.533901   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:45.534427   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:02:45.534448   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:45.534822   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:45.535013   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:02:45.535292   66875 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 20:02:45.535595   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:02:45.535639   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:02:45.551065   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44003
	I0429 20:02:45.551469   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:02:45.551906   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:02:45.551928   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:02:45.552239   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:02:45.552451   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:02:45.584714   66875 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 20:02:45.586089   66875 start.go:297] selected driver: kvm2
	I0429 20:02:45.586117   66875 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:45.586250   66875 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 20:02:45.587043   66875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:45.587136   66875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 20:02:45.601799   66875 install.go:137] /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.0
	I0429 20:02:45.602171   66875 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:02:45.602246   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:02:45.602265   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:02:45.602323   66875 start.go:340] cluster config:
	{Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:02:45.602444   66875 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 20:02:45.605081   66875 out.go:177] * Starting "default-k8s-diff-port-866143" primary control-plane node in "default-k8s-diff-port-866143" cluster
	I0429 20:02:42.794291   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:45.866333   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:45.606536   66875 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:02:45.606590   66875 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 20:02:45.606602   66875 cache.go:56] Caching tarball of preloaded images
	I0429 20:02:45.606687   66875 preload.go:173] Found /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0429 20:02:45.606704   66875 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0429 20:02:45.606799   66875 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/config.json ...
	I0429 20:02:45.606986   66875 start.go:360] acquireMachinesLock for default-k8s-diff-port-866143: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:02:51.946332   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:02:55.018269   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:01.098329   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:04.170389   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:10.250316   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:13.322292   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:19.402290   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:22.474356   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:28.554348   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:31.626416   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:37.706282   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:40.778321   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:46.858318   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:49.930321   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:56.010331   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:03:59.082336   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:05.162299   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:08.234328   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:14.314352   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:17.386337   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:23.466350   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:26.538284   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:32.618297   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:35.690319   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:41.770372   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:44.842280   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:50.922320   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:04:53.994336   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:00.074389   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:03.146353   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:09.226369   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:12.298407   65980 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.184:22: connect: no route to host
	I0429 20:05:15.302828   66218 start.go:364] duration metric: took 4m7.483402316s to acquireMachinesLock for "no-preload-456788"
	I0429 20:05:15.302889   66218 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:15.302896   66218 fix.go:54] fixHost starting: 
	I0429 20:05:15.303301   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:15.303337   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:15.319582   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I0429 20:05:15.320057   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:15.320597   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:05:15.320620   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:15.321017   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:15.321272   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:15.321472   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:05:15.323137   66218 fix.go:112] recreateIfNeeded on no-preload-456788: state=Stopped err=<nil>
	I0429 20:05:15.323171   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	W0429 20:05:15.323346   66218 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:15.325520   66218 out.go:177] * Restarting existing kvm2 VM for "no-preload-456788" ...
	I0429 20:05:15.327122   66218 main.go:141] libmachine: (no-preload-456788) Calling .Start
	I0429 20:05:15.327314   66218 main.go:141] libmachine: (no-preload-456788) Ensuring networks are active...
	I0429 20:05:15.328136   66218 main.go:141] libmachine: (no-preload-456788) Ensuring network default is active
	I0429 20:05:15.328437   66218 main.go:141] libmachine: (no-preload-456788) Ensuring network mk-no-preload-456788 is active
	I0429 20:05:15.328771   66218 main.go:141] libmachine: (no-preload-456788) Getting domain xml...
	I0429 20:05:15.329442   66218 main.go:141] libmachine: (no-preload-456788) Creating domain...
	I0429 20:05:16.534970   66218 main.go:141] libmachine: (no-preload-456788) Waiting to get IP...
	I0429 20:05:16.536019   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:16.536375   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:16.536444   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:16.536369   67416 retry.go:31] will retry after 240.743093ms: waiting for machine to come up
	I0429 20:05:16.779123   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:16.779623   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:16.779659   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:16.779558   67416 retry.go:31] will retry after 355.595109ms: waiting for machine to come up
	I0429 20:05:17.137145   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:17.137512   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:17.137542   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:17.137480   67416 retry.go:31] will retry after 347.905643ms: waiting for machine to come up
	I0429 20:05:17.487174   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:17.487566   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:17.487597   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:17.487543   67416 retry.go:31] will retry after 547.016094ms: waiting for machine to come up
	I0429 20:05:15.300221   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:15.300278   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:05:15.300613   65980 buildroot.go:166] provisioning hostname "embed-certs-161370"
	I0429 20:05:15.300652   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:05:15.300910   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:05:15.302677   65980 machine.go:97] duration metric: took 4m37.41104152s to provisionDockerMachine
	I0429 20:05:15.302722   65980 fix.go:56] duration metric: took 4m37.432092484s for fixHost
	I0429 20:05:15.302728   65980 start.go:83] releasing machines lock for "embed-certs-161370", held for 4m37.432113341s
	W0429 20:05:15.302753   65980 start.go:713] error starting host: provision: host is not running
	W0429 20:05:15.302871   65980 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0429 20:05:15.302882   65980 start.go:728] Will try again in 5 seconds ...
	I0429 20:05:18.036617   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:18.037042   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:18.037104   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:18.037025   67416 retry.go:31] will retry after 465.100134ms: waiting for machine to come up
	I0429 20:05:18.503846   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:18.504326   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:18.504352   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:18.504283   67416 retry.go:31] will retry after 672.007195ms: waiting for machine to come up
	I0429 20:05:19.178173   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:19.178570   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:19.178604   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:19.178516   67416 retry.go:31] will retry after 744.052058ms: waiting for machine to come up
	I0429 20:05:19.924561   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:19.925029   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:19.925060   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:19.925002   67416 retry.go:31] will retry after 1.06511003s: waiting for machine to come up
	I0429 20:05:20.991584   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:20.992015   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:20.992046   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:20.991980   67416 retry.go:31] will retry after 1.677065765s: waiting for machine to come up
	I0429 20:05:22.671760   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:22.672123   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:22.672149   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:22.672085   67416 retry.go:31] will retry after 1.979191189s: waiting for machine to come up
	I0429 20:05:20.303964   65980 start.go:360] acquireMachinesLock for embed-certs-161370: {Name:mkc5138674898e266e3a150644dea789e7bb6641 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 20:05:24.654246   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:24.654711   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:24.654735   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:24.654663   67416 retry.go:31] will retry after 1.839551716s: waiting for machine to come up
	I0429 20:05:26.496511   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:26.496982   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:26.497017   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:26.496939   67416 retry.go:31] will retry after 3.505979368s: waiting for machine to come up
	I0429 20:05:30.006590   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:30.006916   66218 main.go:141] libmachine: (no-preload-456788) DBG | unable to find current IP address of domain no-preload-456788 in network mk-no-preload-456788
	I0429 20:05:30.006951   66218 main.go:141] libmachine: (no-preload-456788) DBG | I0429 20:05:30.006871   67416 retry.go:31] will retry after 3.811785899s: waiting for machine to come up
	I0429 20:05:35.155600   66615 start.go:364] duration metric: took 3m25.093405289s to acquireMachinesLock for "old-k8s-version-919612"
	I0429 20:05:35.155655   66615 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:35.155661   66615 fix.go:54] fixHost starting: 
	I0429 20:05:35.155999   66615 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:35.156034   66615 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:35.173332   66615 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I0429 20:05:35.173754   66615 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:35.174261   66615 main.go:141] libmachine: Using API Version  1
	I0429 20:05:35.174294   66615 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:35.174602   66615 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:35.174797   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:35.174987   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetState
	I0429 20:05:35.176453   66615 fix.go:112] recreateIfNeeded on old-k8s-version-919612: state=Stopped err=<nil>
	I0429 20:05:35.176478   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	W0429 20:05:35.176647   66615 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:35.178966   66615 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-919612" ...
	I0429 20:05:33.823293   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.823787   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has current primary IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.823806   66218 main.go:141] libmachine: (no-preload-456788) Found IP for machine: 192.168.39.235
	I0429 20:05:33.823830   66218 main.go:141] libmachine: (no-preload-456788) Reserving static IP address...
	I0429 20:05:33.824243   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "no-preload-456788", mac: "52:54:00:15:ae:18", ip: "192.168.39.235"} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.824279   66218 main.go:141] libmachine: (no-preload-456788) DBG | skip adding static IP to network mk-no-preload-456788 - found existing host DHCP lease matching {name: "no-preload-456788", mac: "52:54:00:15:ae:18", ip: "192.168.39.235"}
	I0429 20:05:33.824293   66218 main.go:141] libmachine: (no-preload-456788) Reserved static IP address: 192.168.39.235
	I0429 20:05:33.824308   66218 main.go:141] libmachine: (no-preload-456788) Waiting for SSH to be available...
	I0429 20:05:33.824323   66218 main.go:141] libmachine: (no-preload-456788) DBG | Getting to WaitForSSH function...
	I0429 20:05:33.826371   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.826678   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.826711   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.826808   66218 main.go:141] libmachine: (no-preload-456788) DBG | Using SSH client type: external
	I0429 20:05:33.826836   66218 main.go:141] libmachine: (no-preload-456788) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa (-rw-------)
	I0429 20:05:33.826863   66218 main.go:141] libmachine: (no-preload-456788) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.235 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:05:33.826876   66218 main.go:141] libmachine: (no-preload-456788) DBG | About to run SSH command:
	I0429 20:05:33.826887   66218 main.go:141] libmachine: (no-preload-456788) DBG | exit 0
	I0429 20:05:33.954275   66218 main.go:141] libmachine: (no-preload-456788) DBG | SSH cmd err, output: <nil>: 
	I0429 20:05:33.954631   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetConfigRaw
	I0429 20:05:33.955387   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:33.957827   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.958210   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.958241   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.958510   66218 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/config.json ...
	I0429 20:05:33.958707   66218 machine.go:94] provisionDockerMachine start ...
	I0429 20:05:33.958726   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:33.958952   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:33.961236   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.961535   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:33.961564   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:33.961692   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:33.961857   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:33.962015   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:33.962163   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:33.962339   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:33.962522   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:33.962533   66218 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:05:34.070746   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:05:34.070777   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.071037   66218 buildroot.go:166] provisioning hostname "no-preload-456788"
	I0429 20:05:34.071062   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.071305   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.073680   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.074016   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.074043   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.074203   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.074374   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.074513   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.074612   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.074743   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.074946   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.074960   66218 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-456788 && echo "no-preload-456788" | sudo tee /etc/hostname
	I0429 20:05:34.198256   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-456788
	
	I0429 20:05:34.198286   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.201126   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.201482   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.201521   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.201710   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.201914   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.202055   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.202219   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.202361   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.202549   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.202573   66218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-456788' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-456788/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-456788' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:05:34.324678   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:34.324710   66218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:05:34.324732   66218 buildroot.go:174] setting up certificates
	I0429 20:05:34.324744   66218 provision.go:84] configureAuth start
	I0429 20:05:34.324756   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetMachineName
	I0429 20:05:34.325032   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:34.327623   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.328010   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.328040   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.328149   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.330359   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.330679   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.330711   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.330811   66218 provision.go:143] copyHostCerts
	I0429 20:05:34.330865   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:05:34.330878   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:05:34.330939   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:05:34.331023   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:05:34.331031   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:05:34.331054   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:05:34.331111   66218 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:05:34.331119   66218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:05:34.331148   66218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:05:34.331231   66218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.no-preload-456788 san=[127.0.0.1 192.168.39.235 localhost minikube no-preload-456788]
	I0429 20:05:34.444358   66218 provision.go:177] copyRemoteCerts
	I0429 20:05:34.444420   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:05:34.444445   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.447129   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.447432   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.447466   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.447623   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.447833   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.447999   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.448129   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:34.533465   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:05:34.561724   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:05:34.589229   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0429 20:05:34.617451   66218 provision.go:87] duration metric: took 292.691614ms to configureAuth
	I0429 20:05:34.617491   66218 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:05:34.617733   66218 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:05:34.617821   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.620628   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.621016   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.621047   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.621257   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.621532   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.621718   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.621892   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.622085   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:34.622289   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:34.622305   66218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:05:34.908031   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:05:34.908064   66218 machine.go:97] duration metric: took 949.343369ms to provisionDockerMachine
	I0429 20:05:34.908077   66218 start.go:293] postStartSetup for "no-preload-456788" (driver="kvm2")
	I0429 20:05:34.908091   66218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:05:34.908107   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:34.908452   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:05:34.908489   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:34.911574   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.912026   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:34.912054   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:34.912219   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:34.912428   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:34.912616   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:34.912743   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:34.997625   66218 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:05:35.002661   66218 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:05:35.002687   66218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:05:35.002753   66218 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:05:35.002822   66218 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:05:35.002906   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:05:35.013292   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:35.039830   66218 start.go:296] duration metric: took 131.741312ms for postStartSetup
	I0429 20:05:35.039865   66218 fix.go:56] duration metric: took 19.736969384s for fixHost
	I0429 20:05:35.039905   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.042526   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.042877   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.042912   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.043032   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.043239   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.043416   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.043534   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.043696   66218 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:35.043848   66218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.235 22 <nil> <nil>}
	I0429 20:05:35.043858   66218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:05:35.155463   66218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421135.123583649
	
	I0429 20:05:35.155485   66218 fix.go:216] guest clock: 1714421135.123583649
	I0429 20:05:35.155496   66218 fix.go:229] Guest: 2024-04-29 20:05:35.123583649 +0000 UTC Remote: 2024-04-29 20:05:35.039869068 +0000 UTC m=+267.371683880 (delta=83.714581ms)
	I0429 20:05:35.155514   66218 fix.go:200] guest clock delta is within tolerance: 83.714581ms
	I0429 20:05:35.155519   66218 start.go:83] releasing machines lock for "no-preload-456788", held for 19.852645936s
	I0429 20:05:35.155544   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.155881   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:35.158682   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.159051   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.159070   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.159205   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.159793   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.159987   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:05:35.160077   66218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:05:35.160117   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.160216   66218 ssh_runner.go:195] Run: cat /version.json
	I0429 20:05:35.160244   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:05:35.162788   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163016   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163226   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.163250   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163372   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.163449   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:35.163475   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:35.163537   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.163621   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:05:35.163723   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.163791   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:05:35.163873   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:35.163920   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:05:35.164064   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:05:35.248518   66218 ssh_runner.go:195] Run: systemctl --version
	I0429 20:05:35.271479   66218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:05:35.423324   66218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:05:35.430371   66218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:05:35.430445   66218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:05:35.447860   66218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:05:35.447886   66218 start.go:494] detecting cgroup driver to use...
	I0429 20:05:35.447949   66218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:05:35.464102   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:05:35.479069   66218 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:05:35.479158   66218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:05:35.493800   66218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:05:35.509284   66218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:05:35.627273   66218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:05:35.785213   66218 docker.go:233] disabling docker service ...
	I0429 20:05:35.785300   66218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:05:35.803584   66218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:05:35.818874   66218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:05:35.984309   66218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:05:36.128841   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:05:36.148237   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:05:36.172144   66218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:05:36.172243   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.191274   66218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:05:36.191353   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.209656   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.224474   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.238802   66218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:05:36.252515   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.264522   66218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.286496   66218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:36.299127   66218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:05:36.310702   66218 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:05:36.310760   66218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:05:36.336226   66218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:05:36.348617   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:36.474875   66218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:05:36.619181   66218 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:05:36.619257   66218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:05:36.625401   66218 start.go:562] Will wait 60s for crictl version
	I0429 20:05:36.625475   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:36.630232   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:05:36.667005   66218 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:05:36.667093   66218 ssh_runner.go:195] Run: crio --version
	I0429 20:05:36.699758   66218 ssh_runner.go:195] Run: crio --version
	I0429 20:05:36.734406   66218 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:05:36.735853   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetIP
	I0429 20:05:36.738683   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:36.739019   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:05:36.739049   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:05:36.739310   66218 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0429 20:05:36.745227   66218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:36.760124   66218 kubeadm.go:877] updating cluster {Name:no-preload-456788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:05:36.760238   66218 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:05:36.760278   66218 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:05:36.801389   66218 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:05:36.801414   66218 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 20:05:36.801470   66218 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:36.801508   66218 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:36.801524   66218 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:36.801559   66218 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:36.801580   66218 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.801632   66218 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0429 20:05:36.801687   66218 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:36.801688   66218 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:36.803301   66218 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:36.803300   66218 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0429 20:05:36.803275   66218 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.803308   66218 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:36.803382   66218 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:36.956976   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:36.964957   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.022376   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.025860   66218 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0429 20:05:37.025893   66218 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0429 20:05:37.025915   66218 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.025924   66218 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:37.025962   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.025964   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.072629   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0429 20:05:37.072688   66218 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0429 20:05:37.072713   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0429 20:05:37.072741   66218 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.072791   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:37.118610   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0429 20:05:37.118704   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0429 20:05:37.118720   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.128364   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0429 20:05:37.128474   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:37.161350   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0429 20:05:37.165670   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0429 20:05:37.165693   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0429 20:05:37.165710   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.165754   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0429 20:05:37.165762   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0429 20:05:37.165779   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:37.167440   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:37.174173   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:37.180560   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:37.715733   66218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:35.180393   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .Start
	I0429 20:05:35.180576   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring networks are active...
	I0429 20:05:35.181281   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network default is active
	I0429 20:05:35.181678   66615 main.go:141] libmachine: (old-k8s-version-919612) Ensuring network mk-old-k8s-version-919612 is active
	I0429 20:05:35.182102   66615 main.go:141] libmachine: (old-k8s-version-919612) Getting domain xml...
	I0429 20:05:35.182867   66615 main.go:141] libmachine: (old-k8s-version-919612) Creating domain...
	I0429 20:05:36.459478   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting to get IP...
	I0429 20:05:36.460301   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.460751   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.460817   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.460706   67552 retry.go:31] will retry after 280.48781ms: waiting for machine to come up
	I0429 20:05:36.743188   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:36.743630   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:36.743658   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:36.743591   67552 retry.go:31] will retry after 326.238132ms: waiting for machine to come up
	I0429 20:05:37.071146   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.071576   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.071609   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.071527   67552 retry.go:31] will retry after 380.72234ms: waiting for machine to come up
	I0429 20:05:37.453967   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:37.454435   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:37.454464   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:37.454385   67552 retry.go:31] will retry after 593.303053ms: waiting for machine to come up
	I0429 20:05:38.049072   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.049555   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.049587   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.049500   67552 retry.go:31] will retry after 694.752524ms: waiting for machine to come up
	I0429 20:05:38.746542   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:38.747034   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:38.747065   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:38.747002   67552 retry.go:31] will retry after 860.161186ms: waiting for machine to come up
	I0429 20:05:39.609098   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:39.609601   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:39.609634   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:39.609544   67552 retry.go:31] will retry after 726.889681ms: waiting for machine to come up
	I0429 20:05:39.327634   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.161845487s)
	I0429 20:05:39.327673   66218 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.161870572s)
	I0429 20:05:39.327710   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0429 20:05:39.327675   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0429 20:05:39.327737   66218 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:39.327748   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0: (2.16027023s)
	I0429 20:05:39.327805   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0429 20:05:39.327811   66218 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0429 20:05:39.327821   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0: (2.153617598s)
	I0429 20:05:39.327846   66218 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:39.327878   66218 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0429 20:05:39.327891   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0: (2.147303278s)
	I0429 20:05:39.327910   66218 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:39.327929   66218 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0429 20:05:39.327944   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327954   66218 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.612190652s)
	I0429 20:05:39.327960   66218 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:39.327984   66218 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0429 20:05:39.328035   66218 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:39.328061   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327991   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.327886   66218 ssh_runner.go:195] Run: which crictl
	I0429 20:05:39.333555   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0429 20:05:39.343257   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:05:41.263038   66218 ssh_runner.go:235] Completed: which crictl: (1.934889703s)
	I0429 20:05:41.263103   66218 ssh_runner.go:235] Completed: which crictl: (1.93491368s)
	I0429 20:05:41.263121   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0429 20:05:41.263132   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.935299869s)
	I0429 20:05:41.263153   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0: (1.929577799s)
	I0429 20:05:41.263155   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0429 20:05:41.263217   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.919934007s)
	I0429 20:05:41.263221   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0429 20:05:41.263248   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:41.263251   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0429 20:05:41.263290   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0429 20:05:41.263301   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:41.263343   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:41.263159   66218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0429 20:05:40.338292   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:40.338823   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:40.338864   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:40.338757   67552 retry.go:31] will retry after 1.310400969s: waiting for machine to come up
	I0429 20:05:41.651107   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:41.651625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:41.651670   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:41.651575   67552 retry.go:31] will retry after 1.769756679s: waiting for machine to come up
	I0429 20:05:43.423326   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:43.423829   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:43.423869   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:43.423790   67552 retry.go:31] will retry after 1.748237944s: waiting for machine to come up
	I0429 20:05:44.084051   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.820737476s)
	I0429 20:05:44.084139   66218 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.820774517s)
	I0429 20:05:44.084167   66218 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.820842646s)
	I0429 20:05:44.084186   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0429 20:05:44.084142   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0429 20:05:44.084202   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0429 20:05:44.084211   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:44.084065   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0: (2.820919138s)
	I0429 20:05:44.084244   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0429 20:05:44.084260   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0429 20:05:44.084272   66218 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0: (2.82086612s)
	I0429 20:05:44.084305   66218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0429 20:05:44.084331   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:44.084375   66218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:44.091151   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0429 20:05:46.553783   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.469493694s)
	I0429 20:05:46.553882   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0429 20:05:46.553912   66218 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:46.553837   66218 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0: (2.469479626s)
	I0429 20:05:46.553973   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0429 20:05:46.553975   66218 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0429 20:05:47.510118   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0429 20:05:47.510169   66218 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:47.510212   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0429 20:05:45.173157   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:45.173617   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:45.173642   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:45.173563   67552 retry.go:31] will retry after 2.784243469s: waiting for machine to come up
	I0429 20:05:47.959942   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:47.960473   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:47.960508   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:47.960410   67552 retry.go:31] will retry after 3.046526969s: waiting for machine to come up
	I0429 20:05:49.069163   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.55892426s)
	I0429 20:05:49.069202   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0429 20:05:49.069231   66218 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:49.069276   66218 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0429 20:05:51.007941   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:51.008230   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | unable to find current IP address of domain old-k8s-version-919612 in network mk-old-k8s-version-919612
	I0429 20:05:51.008253   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | I0429 20:05:51.008213   67552 retry.go:31] will retry after 4.220985004s: waiting for machine to come up
	I0429 20:05:56.579154   66875 start.go:364] duration metric: took 3m10.972135355s to acquireMachinesLock for "default-k8s-diff-port-866143"
	I0429 20:05:56.579208   66875 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:05:56.579230   66875 fix.go:54] fixHost starting: 
	I0429 20:05:56.579615   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:05:56.579655   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:05:56.599113   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0429 20:05:56.599627   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:05:56.600173   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:05:56.600198   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:05:56.600488   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:05:56.600694   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:05:56.600849   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:05:56.602291   66875 fix.go:112] recreateIfNeeded on default-k8s-diff-port-866143: state=Stopped err=<nil>
	I0429 20:05:56.602315   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	W0429 20:05:56.602456   66875 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:05:56.605006   66875 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-866143" ...
	I0429 20:05:53.062693   66218 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.993382111s)
	I0429 20:05:53.062730   66218 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0429 20:05:53.062757   66218 cache_images.go:123] Successfully loaded all cached images
	I0429 20:05:53.062762   66218 cache_images.go:92] duration metric: took 16.261337424s to LoadCachedImages
	I0429 20:05:53.062770   66218 kubeadm.go:928] updating node { 192.168.39.235 8443 v1.30.0 crio true true} ...
	I0429 20:05:53.062893   66218 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-456788 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:05:53.062994   66218 ssh_runner.go:195] Run: crio config
	I0429 20:05:53.116289   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:05:53.116311   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:05:53.116322   66218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:05:53.116340   66218 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.235 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-456788 NodeName:no-preload-456788 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:05:53.116516   66218 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-456788"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:05:53.116592   66218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:05:53.128095   66218 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:05:53.128174   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:05:53.138786   66218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0429 20:05:53.158151   66218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:05:53.176440   66218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0429 20:05:53.195348   66218 ssh_runner.go:195] Run: grep 192.168.39.235	control-plane.minikube.internal$ /etc/hosts
	I0429 20:05:53.199408   66218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.235	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:53.212407   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:53.349752   66218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:05:53.368381   66218 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788 for IP: 192.168.39.235
	I0429 20:05:53.368401   66218 certs.go:194] generating shared ca certs ...
	I0429 20:05:53.368415   66218 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:05:53.368565   66218 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:05:53.368609   66218 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:05:53.368619   66218 certs.go:256] generating profile certs ...
	I0429 20:05:53.368697   66218 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.key
	I0429 20:05:53.368751   66218 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.key.5f45c78c
	I0429 20:05:53.368785   66218 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.key
	I0429 20:05:53.368889   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:05:53.368915   66218 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:05:53.368921   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:05:53.368944   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:05:53.368972   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:05:53.368993   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:05:53.369029   66218 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:53.369624   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:05:53.428403   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:05:53.467050   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:05:53.501319   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:05:53.528828   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 20:05:53.553742   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:05:53.582308   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:05:53.609324   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:05:53.636730   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:05:53.663388   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:05:53.690949   66218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:05:53.717113   66218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:05:53.735784   66218 ssh_runner.go:195] Run: openssl version
	I0429 20:05:53.741879   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:05:53.752930   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.757811   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.757861   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:05:53.763798   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:05:53.775019   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:05:53.786654   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.791457   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.791500   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:05:53.797608   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:05:53.809139   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:05:53.820927   66218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.826384   66218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.826441   66218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:05:53.832798   66218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:05:53.844300   66218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:05:53.849139   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:05:53.855556   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:05:53.861716   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:05:53.868390   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:05:53.874740   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:05:53.881101   66218 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 20:05:53.887688   66218 kubeadm.go:391] StartCluster: {Name:no-preload-456788 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-456788 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:05:53.887807   66218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:05:53.887858   66218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:05:53.930491   66218 cri.go:89] found id: ""
	I0429 20:05:53.930563   66218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:05:53.941016   66218 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:05:53.941037   66218 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:05:53.941042   66218 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:05:53.941081   66218 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:05:53.950651   66218 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:05:53.951536   66218 kubeconfig.go:125] found "no-preload-456788" server: "https://192.168.39.235:8443"
	I0429 20:05:53.953451   66218 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:05:53.962857   66218 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.235
	I0429 20:05:53.962879   66218 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:05:53.962889   66218 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:05:53.962932   66218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:05:54.000841   66218 cri.go:89] found id: ""
	I0429 20:05:54.000909   66218 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:05:54.018221   66218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:05:54.028524   66218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:05:54.028556   66218 kubeadm.go:156] found existing configuration files:
	
	I0429 20:05:54.028600   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:05:54.038717   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:05:54.038807   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:05:54.049350   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:05:54.059483   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:05:54.059548   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:05:54.069518   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:05:54.078900   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:05:54.078953   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:05:54.088652   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:05:54.098545   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:05:54.098596   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:05:54.108351   66218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:05:54.118645   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:54.236330   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:55.859211   66218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.622843221s)
	I0429 20:05:55.859254   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.075993   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.175176   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:05:56.274249   66218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:05:56.274469   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:56.775315   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:57.274840   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:05:57.315656   66218 api_server.go:72] duration metric: took 1.041421989s to wait for apiserver process to appear ...
	I0429 20:05:57.315697   66218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:05:57.315719   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:05:57.316669   66218 api_server.go:269] stopped: https://192.168.39.235:8443/healthz: Get "https://192.168.39.235:8443/healthz": dial tcp 192.168.39.235:8443: connect: connection refused
	I0429 20:05:55.230409   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230860   66615 main.go:141] libmachine: (old-k8s-version-919612) Found IP for machine: 192.168.72.240
	I0429 20:05:55.230889   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has current primary IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.230898   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserving static IP address...
	I0429 20:05:55.231252   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.231287   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | skip adding static IP to network mk-old-k8s-version-919612 - found existing host DHCP lease matching {name: "old-k8s-version-919612", mac: "52:54:00:62:23:ed", ip: "192.168.72.240"}
	I0429 20:05:55.231305   66615 main.go:141] libmachine: (old-k8s-version-919612) Reserved static IP address: 192.168.72.240
	I0429 20:05:55.231319   66615 main.go:141] libmachine: (old-k8s-version-919612) Waiting for SSH to be available...
	I0429 20:05:55.231335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Getting to WaitForSSH function...
	I0429 20:05:55.233198   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.233500   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.233625   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH client type: external
	I0429 20:05:55.233671   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa (-rw-------)
	I0429 20:05:55.233706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:05:55.233730   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | About to run SSH command:
	I0429 20:05:55.233747   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | exit 0
	I0429 20:05:55.354242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | SSH cmd err, output: <nil>: 
	I0429 20:05:55.354584   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetConfigRaw
	I0429 20:05:55.355221   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.357791   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358242   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.358276   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.358564   66615 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/config.json ...
	I0429 20:05:55.358786   66615 machine.go:94] provisionDockerMachine start ...
	I0429 20:05:55.358807   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:55.359037   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.361536   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.361861   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.361885   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.362048   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.362247   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362416   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.362568   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.362733   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.362930   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.362943   66615 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:05:55.462364   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:05:55.462388   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462632   66615 buildroot.go:166] provisioning hostname "old-k8s-version-919612"
	I0429 20:05:55.462669   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.462852   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.465335   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465674   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.465706   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.465836   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.466034   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466208   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.466366   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.466525   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.466729   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.466745   66615 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-919612 && echo "old-k8s-version-919612" | sudo tee /etc/hostname
	I0429 20:05:55.596239   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-919612
	
	I0429 20:05:55.596281   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.599221   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599575   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.599606   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.599770   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.599970   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600122   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.600316   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.600498   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:55.600667   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:55.600690   66615 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-919612' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-919612/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-919612' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:05:55.716588   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:05:55.716621   66615 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:05:55.716647   66615 buildroot.go:174] setting up certificates
	I0429 20:05:55.716658   66615 provision.go:84] configureAuth start
	I0429 20:05:55.716671   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetMachineName
	I0429 20:05:55.716956   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:55.719569   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.719919   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.719956   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.720095   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.722484   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.722876   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.722912   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.723036   66615 provision.go:143] copyHostCerts
	I0429 20:05:55.723087   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:05:55.723097   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:05:55.723158   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:05:55.723253   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:05:55.723262   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:05:55.723280   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:05:55.723336   66615 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:05:55.723342   66615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:05:55.723358   66615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:05:55.723404   66615 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-919612 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-919612]
	I0429 20:05:55.878639   66615 provision.go:177] copyRemoteCerts
	I0429 20:05:55.878724   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:05:55.878750   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:55.881746   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882306   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:55.882358   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:55.882540   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:55.882743   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:55.882986   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:55.883139   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:55.973158   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:05:56.003094   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0429 20:05:56.031670   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:05:56.059049   66615 provision.go:87] duration metric: took 342.376371ms to configureAuth
	I0429 20:05:56.059091   66615 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:05:56.059335   66615 config.go:182] Loaded profile config "old-k8s-version-919612": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0429 20:05:56.059441   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.062416   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.062887   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.062921   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.063082   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.063322   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063521   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.063688   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.063901   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.064066   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.064082   66615 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:05:56.342484   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:05:56.342511   66615 machine.go:97] duration metric: took 983.711183ms to provisionDockerMachine
	I0429 20:05:56.342525   66615 start.go:293] postStartSetup for "old-k8s-version-919612" (driver="kvm2")
	I0429 20:05:56.342540   66615 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:05:56.342589   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.342931   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:05:56.342983   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.345399   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345710   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.345731   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.345869   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.346047   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.346233   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.346418   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.431189   66615 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:05:56.435878   66615 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:05:56.435903   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:05:56.435983   66615 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:05:56.436086   66615 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:05:56.436170   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:05:56.445841   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:05:56.472683   66615 start.go:296] duration metric: took 130.146591ms for postStartSetup
	I0429 20:05:56.472715   66615 fix.go:56] duration metric: took 21.31705375s for fixHost
	I0429 20:05:56.472736   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.475127   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475470   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.475492   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.475624   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.475857   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476055   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.476211   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.476378   66615 main.go:141] libmachine: Using SSH client type: native
	I0429 20:05:56.476536   66615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I0429 20:05:56.476547   66615 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:05:56.578999   66615 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421156.548872445
	
	I0429 20:05:56.579028   66615 fix.go:216] guest clock: 1714421156.548872445
	I0429 20:05:56.579040   66615 fix.go:229] Guest: 2024-04-29 20:05:56.548872445 +0000 UTC Remote: 2024-04-29 20:05:56.472718546 +0000 UTC m=+226.572342220 (delta=76.153899ms)
	I0429 20:05:56.579068   66615 fix.go:200] guest clock delta is within tolerance: 76.153899ms
	I0429 20:05:56.579076   66615 start.go:83] releasing machines lock for "old-k8s-version-919612", held for 21.423436193s
	I0429 20:05:56.579111   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.579407   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:56.582338   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582673   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.582711   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.582856   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583365   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583543   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .DriverName
	I0429 20:05:56.583625   66615 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:05:56.583667   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.583765   66615 ssh_runner.go:195] Run: cat /version.json
	I0429 20:05:56.583805   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHHostname
	I0429 20:05:56.586263   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586552   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586618   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586656   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.586891   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.586953   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:56.586989   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:56.587060   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587170   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHPort
	I0429 20:05:56.587240   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587310   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHKeyPath
	I0429 20:05:56.587458   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetSSHUsername
	I0429 20:05:56.587462   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.587600   66615 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/old-k8s-version-919612/id_rsa Username:docker}
	I0429 20:05:56.672678   66615 ssh_runner.go:195] Run: systemctl --version
	I0429 20:05:56.694175   66615 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:05:56.859009   66615 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:05:56.865723   66615 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:05:56.865798   66615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:05:56.885686   66615 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:05:56.885714   66615 start.go:494] detecting cgroup driver to use...
	I0429 20:05:56.885805   66615 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:05:56.909082   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:05:56.931583   66615 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:05:56.931646   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:05:56.953524   66615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:05:56.976170   66615 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:05:57.122813   66615 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:05:57.315725   66615 docker.go:233] disabling docker service ...
	I0429 20:05:57.315786   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:05:57.333927   66615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:05:57.350022   66615 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:05:57.525787   66615 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:05:57.685802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:05:57.703246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:05:57.730558   66615 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0429 20:05:57.730618   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.747081   66615 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:05:57.747133   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.760168   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.773553   66615 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:05:57.787609   66615 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:05:57.800532   66615 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:05:57.813582   66615 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:05:57.813669   66615 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:05:57.832224   66615 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:05:57.844783   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:05:57.991666   66615 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:05:58.183635   66615 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:05:58.183718   66615 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:05:58.189441   66615 start.go:562] Will wait 60s for crictl version
	I0429 20:05:58.189509   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:05:58.194049   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:05:58.250751   66615 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:05:58.250839   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.292368   66615 ssh_runner.go:195] Run: crio --version
	I0429 20:05:58.336121   66615 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0429 20:05:58.337389   66615 main.go:141] libmachine: (old-k8s-version-919612) Calling .GetIP
	I0429 20:05:58.340707   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341125   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:23:ed", ip: ""} in network mk-old-k8s-version-919612: {Iface:virbr4 ExpiryTime:2024-04-29 21:05:47 +0000 UTC Type:0 Mac:52:54:00:62:23:ed Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-919612 Clientid:01:52:54:00:62:23:ed}
	I0429 20:05:58.341153   66615 main.go:141] libmachine: (old-k8s-version-919612) DBG | domain old-k8s-version-919612 has defined IP address 192.168.72.240 and MAC address 52:54:00:62:23:ed in network mk-old-k8s-version-919612
	I0429 20:05:58.341387   66615 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0429 20:05:58.346434   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:05:58.361081   66615 kubeadm.go:877] updating cluster {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:05:58.361242   66615 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 20:05:58.361307   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:05:58.414304   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:05:58.414366   66615 ssh_runner.go:195] Run: which lz4
	I0429 20:05:58.420584   66615 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:05:58.425682   66615 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:05:58.425712   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0429 20:05:56.606748   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Start
	I0429 20:05:56.606929   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring networks are active...
	I0429 20:05:56.607627   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring network default is active
	I0429 20:05:56.608028   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Ensuring network mk-default-k8s-diff-port-866143 is active
	I0429 20:05:56.608557   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Getting domain xml...
	I0429 20:05:56.609325   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Creating domain...
	I0429 20:05:57.911657   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting to get IP...
	I0429 20:05:57.912705   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:57.913118   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:57.913211   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:57.913104   67743 retry.go:31] will retry after 298.590493ms: waiting for machine to come up
	I0429 20:05:58.213730   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.214424   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.214578   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:58.214487   67743 retry.go:31] will retry after 375.439886ms: waiting for machine to come up
	I0429 20:05:58.592145   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.592671   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:58.592700   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:58.592626   67743 retry.go:31] will retry after 432.890106ms: waiting for machine to come up
	I0429 20:05:59.027344   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.027782   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.027812   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:59.027732   67743 retry.go:31] will retry after 547.616894ms: waiting for machine to come up
	I0429 20:05:59.576555   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.577116   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:05:59.577140   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:05:59.577058   67743 retry.go:31] will retry after 662.088326ms: waiting for machine to come up
	I0429 20:06:00.240907   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.241712   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.241744   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:00.241667   67743 retry.go:31] will retry after 691.874394ms: waiting for machine to come up
	I0429 20:05:57.816218   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.079778   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:01.079817   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:01.079832   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.112008   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:01.112043   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:01.316358   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.322401   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:01.322437   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:01.815974   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:01.825156   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:01.825219   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:02.316473   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:02.328725   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:02.328763   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:02.816674   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:02.822826   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:02.822866   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:03.315863   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:03.323314   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:03.323366   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:03.816529   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:03.822521   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:03.822556   66218 api_server.go:103] status: https://192.168.39.235:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:04.316336   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:06:04.325750   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I0429 20:06:04.337308   66218 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:04.337348   66218 api_server.go:131] duration metric: took 7.02164287s to wait for apiserver health ...
	I0429 20:06:04.337361   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:06:04.337370   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:04.505344   66218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:00.520217   66615 crio.go:462] duration metric: took 2.099664395s to copy over tarball
	I0429 20:06:00.520314   66615 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:04.082476   66615 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.562128598s)
	I0429 20:06:04.082527   66615 crio.go:469] duration metric: took 3.562271241s to extract the tarball
	I0429 20:06:04.082538   66615 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:04.129338   66615 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:04.177683   66615 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0429 20:06:04.177709   66615 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0429 20:06:04.177762   66615 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.177798   66615 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.177817   66615 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.177834   66615 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.177835   66615 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.177783   66615 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.177897   66615 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0429 20:06:04.177972   66615 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179282   66615 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.179360   66615 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.179361   66615 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:04.179320   66615 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.179331   66615 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.179299   66615 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.179333   66615 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0429 20:06:04.323997   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376145   66615 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0429 20:06:04.376210   66615 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.376261   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.381592   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0429 20:06:04.420565   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0429 20:06:04.440670   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0429 20:06:04.461763   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.499283   66615 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0429 20:06:04.499347   66615 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0429 20:06:04.499404   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513860   66615 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0429 20:06:04.513900   66615 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.513946   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.513988   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0429 20:06:04.548990   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.556713   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.556942   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0429 20:06:04.556965   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0429 20:06:04.566227   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.598982   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.656930   66615 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0429 20:06:04.656980   66615 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.657038   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.724922   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0429 20:06:04.725179   66615 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0429 20:06:04.725218   66615 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.725262   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732375   66615 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0429 20:06:04.732429   66615 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.732482   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.732492   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0429 20:06:04.732483   66615 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0429 20:06:04.732669   66615 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.732726   66615 ssh_runner.go:195] Run: which crictl
	I0429 20:06:04.735419   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0429 20:06:04.739785   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0429 20:06:04.742496   66615 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0429 20:06:04.834684   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0429 20:06:04.834754   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0429 20:06:04.834811   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0429 20:06:04.847076   66615 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0429 20:06:00.935382   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.935935   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:00.935979   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:00.935902   67743 retry.go:31] will retry after 1.024898519s: waiting for machine to come up
	I0429 20:06:01.962446   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:01.963109   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:01.963140   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:01.963059   67743 retry.go:31] will retry after 1.19225855s: waiting for machine to come up
	I0429 20:06:03.157257   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:03.157781   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:03.157843   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:03.157738   67743 retry.go:31] will retry after 1.699779549s: waiting for machine to come up
	I0429 20:06:04.859190   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:04.859622   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:04.859670   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:04.859565   67743 retry.go:31] will retry after 2.307475318s: waiting for machine to come up
	I0429 20:06:04.671477   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:04.684650   66218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:04.718146   66218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:04.908181   66218 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:04.908213   66218 system_pods.go:61] "coredns-7db6d8ff4d-d4kwk" [215ff4b8-3ae5-49a7-8a9f-6acb4d176b93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:04.908223   66218 system_pods.go:61] "etcd-no-preload-456788" [3ec7e177-1b68-4bff-aa4d-803f5346e1be] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:04.908231   66218 system_pods.go:61] "kube-apiserver-no-preload-456788" [5e8bf0b0-9669-4f0c-8da1-523589158b16] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:04.908236   66218 system_pods.go:61] "kube-controller-manager-no-preload-456788" [515363f7-bde1-4ba7-a5a9-6779f673afaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:04.908240   66218 system_pods.go:61] "kube-proxy-slnph" [29f503bf-ce19-425c-8174-2b8e7b27a424] Running
	I0429 20:06:04.908253   66218 system_pods.go:61] "kube-scheduler-no-preload-456788" [4f394af0-6452-49dd-9770-7c6bfcff3936] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:04.908258   66218 system_pods.go:61] "metrics-server-569cc877fc-6mpnm" [5f183615-a243-410a-a524-ebdaa65e6400] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:04.908262   66218 system_pods.go:61] "storage-provisioner" [f74a777d-a3d7-4682-bad0-44bb993a2d43] Running
	I0429 20:06:04.908270   66218 system_pods.go:74] duration metric: took 190.098153ms to wait for pod list to return data ...
	I0429 20:06:04.908278   66218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:05.212876   66218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:05.212913   66218 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:05.212929   66218 node_conditions.go:105] duration metric: took 304.645545ms to run NodePressure ...
	I0429 20:06:05.212950   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:05.913252   66218 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:05.928914   66218 kubeadm.go:733] kubelet initialised
	I0429 20:06:05.928947   66218 kubeadm.go:734] duration metric: took 15.668535ms waiting for restarted kubelet to initialise ...
	I0429 20:06:05.928957   66218 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:05.937357   66218 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:05.091766   66615 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:05.269730   66615 cache_images.go:92] duration metric: took 1.092006107s to LoadCachedImages
	W0429 20:06:05.269839   66615 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18774-7754/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0429 20:06:05.269857   66615 kubeadm.go:928] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I0429 20:06:05.269988   66615 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-919612 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:05.270088   66615 ssh_runner.go:195] Run: crio config
	I0429 20:06:05.322439   66615 cni.go:84] Creating CNI manager for ""
	I0429 20:06:05.322471   66615 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:05.322486   66615 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:05.322522   66615 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-919612 NodeName:old-k8s-version-919612 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0429 20:06:05.322746   66615 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-919612"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:05.322810   66615 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0429 20:06:05.340981   66615 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:05.341058   66615 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:05.357048   66615 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0429 20:06:05.384352   66615 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:05.407887   66615 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0429 20:06:05.431531   66615 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:05.437567   66615 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:05.457652   66615 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:05.610358   66615 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:05.641538   66615 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612 for IP: 192.168.72.240
	I0429 20:06:05.641568   66615 certs.go:194] generating shared ca certs ...
	I0429 20:06:05.641583   66615 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:05.641758   66615 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:05.641831   66615 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:05.641843   66615 certs.go:256] generating profile certs ...
	I0429 20:06:05.641948   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.key
	I0429 20:06:05.642020   66615 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key.5df5e618
	I0429 20:06:05.642083   66615 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key
	I0429 20:06:05.642256   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:05.642304   66615 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:05.642325   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:05.642364   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:05.642401   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:05.642435   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:05.642489   66615 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:05.643156   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:05.691350   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:05.734434   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:05.773056   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:05.819778   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0429 20:06:05.868256   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:05.911589   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:05.957714   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0429 20:06:06.002120   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:06.039736   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:06.079636   66615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:06.118317   66615 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:06.145932   66615 ssh_runner.go:195] Run: openssl version
	I0429 20:06:06.152970   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:06.166609   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.171939   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.172033   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:06.179153   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:06.193491   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:06.207800   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214803   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.214876   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:06.222154   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:06.236908   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:06.254197   66615 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260797   66615 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.260863   66615 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:06.267635   66615 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:06.282727   66615 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:06.289580   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:06.301014   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:06.310503   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:06.318708   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:06.325718   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:06.332690   66615 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0429 20:06:06.339914   66615 kubeadm.go:391] StartCluster: {Name:old-k8s-version-919612 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919612 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:06.340012   66615 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:06.340069   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.391511   66615 cri.go:89] found id: ""
	I0429 20:06:06.391618   66615 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:06.408955   66615 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:06.408985   66615 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:06.408991   66615 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:06.409060   66615 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:06.425276   66615 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:06.426397   66615 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-919612" does not appear in /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:06:06.427298   66615 kubeconfig.go:62] /home/jenkins/minikube-integration/18774-7754/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-919612" cluster setting kubeconfig missing "old-k8s-version-919612" context setting]
	I0429 20:06:06.428287   66615 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:06.429908   66615 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:06.443630   66615 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.240
	I0429 20:06:06.443674   66615 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:06.443686   66615 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:06.443753   66615 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:06.486251   66615 cri.go:89] found id: ""
	I0429 20:06:06.486339   66615 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:06.507136   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:06.523798   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:06.523828   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:06.523887   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:06:06.536668   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:06.536735   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:06.547800   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:06:06.560435   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:06.560517   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:06.572227   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.582772   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:06.582825   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:06.594168   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:06:06.605940   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:06.606013   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:06.621829   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:06.637520   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:06.779910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:07.921143   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.141191032s)
	I0429 20:06:07.921178   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.172381   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.276243   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:08.398312   66615 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:08.398424   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:08.899388   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.399344   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:09.898731   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:07.168679   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:07.169214   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:07.169264   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:07.169146   67743 retry.go:31] will retry after 2.050354993s: waiting for machine to come up
	I0429 20:06:09.221915   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:09.222545   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:09.222581   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:09.222449   67743 retry.go:31] will retry after 2.544889222s: waiting for machine to come up
	I0429 20:06:07.947247   66218 pod_ready.go:102] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:10.449364   66218 pod_ready.go:102] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:10.943731   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:10.943754   66218 pod_ready.go:81] duration metric: took 5.006367348s for pod "coredns-7db6d8ff4d-d4kwk" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:10.943763   66218 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.453825   66218 pod_ready.go:92] pod "etcd-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.453853   66218 pod_ready.go:81] duration metric: took 1.510082371s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.453865   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.462971   66218 pod_ready.go:92] pod "kube-apiserver-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.462997   66218 pod_ready.go:81] duration metric: took 9.123374ms for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.463011   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.471032   66218 pod_ready.go:92] pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.471066   66218 pod_ready.go:81] duration metric: took 8.024113ms for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.471077   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-slnph" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.478671   66218 pod_ready.go:92] pod "kube-proxy-slnph" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.478695   66218 pod_ready.go:81] duration metric: took 7.609313ms for pod "kube-proxy-slnph" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.478706   66218 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.542851   66218 pod_ready.go:92] pod "kube-scheduler-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:12.542875   66218 pod_ready.go:81] duration metric: took 64.16109ms for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:12.542888   66218 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:10.399055   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:10.898742   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.399250   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.898511   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.399301   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:12.899399   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.399242   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:13.899417   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.398526   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:14.898976   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:11.768576   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:11.768967   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | unable to find current IP address of domain default-k8s-diff-port-866143 in network mk-default-k8s-diff-port-866143
	I0429 20:06:11.769003   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | I0429 20:06:11.768924   67743 retry.go:31] will retry after 3.829285986s: waiting for machine to come up
	I0429 20:06:17.032004   65980 start.go:364] duration metric: took 56.727982697s to acquireMachinesLock for "embed-certs-161370"
	I0429 20:06:17.032074   65980 start.go:96] Skipping create...Using existing machine configuration
	I0429 20:06:17.032085   65980 fix.go:54] fixHost starting: 
	I0429 20:06:17.032452   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:17.032485   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:17.050767   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44211
	I0429 20:06:17.051181   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:17.051655   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:06:17.051680   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:17.052002   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:17.052188   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:17.052363   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:06:17.053975   65980 fix.go:112] recreateIfNeeded on embed-certs-161370: state=Stopped err=<nil>
	I0429 20:06:17.054002   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	W0429 20:06:17.054167   65980 fix.go:138] unexpected machine state, will restart: <nil>
	I0429 20:06:17.056054   65980 out.go:177] * Restarting existing kvm2 VM for "embed-certs-161370" ...
	I0429 20:06:14.550615   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:17.050288   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:17.057452   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Start
	I0429 20:06:17.057630   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring networks are active...
	I0429 20:06:17.058381   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring network default is active
	I0429 20:06:17.058680   65980 main.go:141] libmachine: (embed-certs-161370) Ensuring network mk-embed-certs-161370 is active
	I0429 20:06:17.059024   65980 main.go:141] libmachine: (embed-certs-161370) Getting domain xml...
	I0429 20:06:17.059697   65980 main.go:141] libmachine: (embed-certs-161370) Creating domain...
	I0429 20:06:15.599423   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.599897   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has current primary IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.599915   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Found IP for machine: 192.168.61.106
	I0429 20:06:15.599929   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Reserving static IP address...
	I0429 20:06:15.600318   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Reserved static IP address: 192.168.61.106
	I0429 20:06:15.600360   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-866143", mac: "52:54:00:af:de:09", ip: "192.168.61.106"} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.600375   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Waiting for SSH to be available...
	I0429 20:06:15.600405   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | skip adding static IP to network mk-default-k8s-diff-port-866143 - found existing host DHCP lease matching {name: "default-k8s-diff-port-866143", mac: "52:54:00:af:de:09", ip: "192.168.61.106"}
	I0429 20:06:15.600423   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Getting to WaitForSSH function...
	I0429 20:06:15.602983   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.603379   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.603414   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.603581   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Using SSH client type: external
	I0429 20:06:15.603611   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa (-rw-------)
	I0429 20:06:15.603675   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:06:15.603701   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | About to run SSH command:
	I0429 20:06:15.603733   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | exit 0
	I0429 20:06:15.734933   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | SSH cmd err, output: <nil>: 
	I0429 20:06:15.735306   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetConfigRaw
	I0429 20:06:15.735918   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:15.738878   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.739349   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.739385   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.739745   66875 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/config.json ...
	I0429 20:06:15.739943   66875 machine.go:94] provisionDockerMachine start ...
	I0429 20:06:15.739966   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:15.740215   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.742731   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.743068   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.743097   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.743253   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.743448   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.743592   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.743729   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.743859   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.744066   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.744080   66875 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:06:15.855258   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:06:15.855292   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:15.855585   66875 buildroot.go:166] provisioning hostname "default-k8s-diff-port-866143"
	I0429 20:06:15.855604   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:15.855792   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.858278   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.858644   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.858672   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.858802   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.858996   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.859179   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.859327   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.859498   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.859667   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.859682   66875 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-866143 && echo "default-k8s-diff-port-866143" | sudo tee /etc/hostname
	I0429 20:06:15.986031   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-866143
	
	I0429 20:06:15.986094   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:15.989211   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.989633   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:15.989666   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:15.989858   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:15.990078   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.990281   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:15.990441   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:15.990591   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:15.990746   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:15.990763   66875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-866143' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-866143/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-866143' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:06:16.119358   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:06:16.119389   66875 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:06:16.119420   66875 buildroot.go:174] setting up certificates
	I0429 20:06:16.119431   66875 provision.go:84] configureAuth start
	I0429 20:06:16.119442   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetMachineName
	I0429 20:06:16.119741   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:16.122611   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.122991   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.123016   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.123180   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.125378   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.125673   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.125713   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.125805   66875 provision.go:143] copyHostCerts
	I0429 20:06:16.125883   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:06:16.125896   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:06:16.125963   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:06:16.126112   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:06:16.126125   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:06:16.126152   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:06:16.126234   66875 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:06:16.126245   66875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:06:16.126270   66875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:06:16.126348   66875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-866143 san=[127.0.0.1 192.168.61.106 default-k8s-diff-port-866143 localhost minikube]
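
The provision.go line above issues a server certificate signed by the minikube CA with the SANs it lists (127.0.0.1, 192.168.61.106, the machine hostname, localhost, minikube). The following is a simplified, self-contained sketch of issuing such a certificate with Go's crypto/x509; key sizes, validity, and the throwaway CA are assumptions for illustration, not minikube's provisioning code.

// server_cert_sketch.go — a simplified illustration (not minikube's provision
// code) of issuing a server certificate with the SANs reported in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// A throwaway CA; minikube reuses the ca.pem/ca-key.pem under .minikube/certs.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server certificate with the SANs listed in the provision.go line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-866143"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-866143", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.106")},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
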
	I0429 20:06:16.280583   66875 provision.go:177] copyRemoteCerts
	I0429 20:06:16.280641   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:06:16.280665   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.283452   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.283760   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.283800   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.283999   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.284175   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.284335   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.284428   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:16.374564   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:06:16.408695   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0429 20:06:16.441975   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 20:06:16.470921   66875 provision.go:87] duration metric: took 351.479703ms to configureAuth
	I0429 20:06:16.470946   66875 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:06:16.471124   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:16.471205   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.473799   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.474105   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.474139   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.474291   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.474502   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.474692   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.474830   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.474995   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:16.475152   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:16.475167   66875 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:06:16.774044   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:06:16.774093   66875 machine.go:97] duration metric: took 1.034135495s to provisionDockerMachine
	I0429 20:06:16.774108   66875 start.go:293] postStartSetup for "default-k8s-diff-port-866143" (driver="kvm2")
	I0429 20:06:16.774123   66875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:06:16.774148   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:16.774509   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:06:16.774539   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.777163   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.777603   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.777639   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.777779   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.777949   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.778109   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.778259   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:16.866104   66875 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:06:16.870760   66875 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:06:16.870780   66875 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:06:16.870839   66875 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:06:16.870916   66875 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:06:16.871003   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:06:16.881137   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:16.911284   66875 start.go:296] duration metric: took 137.163661ms for postStartSetup
	I0429 20:06:16.911318   66875 fix.go:56] duration metric: took 20.332102679s for fixHost
	I0429 20:06:16.911337   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:16.914440   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.914810   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:16.914838   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:16.915087   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:16.915287   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.915511   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:16.915692   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:16.915886   66875 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:16.916034   66875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.106 22 <nil> <nil>}
	I0429 20:06:16.916045   66875 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:06:17.031867   66875 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421177.003309274
	
	I0429 20:06:17.031892   66875 fix.go:216] guest clock: 1714421177.003309274
	I0429 20:06:17.031900   66875 fix.go:229] Guest: 2024-04-29 20:06:17.003309274 +0000 UTC Remote: 2024-04-29 20:06:16.911322778 +0000 UTC m=+211.453402116 (delta=91.986496ms)
	I0429 20:06:17.031921   66875 fix.go:200] guest clock delta is within tolerance: 91.986496ms
	I0429 20:06:17.031928   66875 start.go:83] releasing machines lock for "default-k8s-diff-port-866143", held for 20.452741912s
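
The fix.go lines above compare the guest's wall clock against the host-side timestamp and accept the machine because the ~92 ms drift is within tolerance. A small sketch of that comparison follows; the 2-second tolerance is an assumption for illustration, not the value minikube uses.

// clock_delta_sketch.go — illustrates the guest-clock check in fix.go above:
// the drift between guest and host timestamps is accepted if it stays under a
// tolerance. The 2s tolerance here is an assumption, not minikube's value.
package main

import (
	"fmt"
	"time"
)

func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1714421177, 3309274).UTC()                            // 1714421177.003309274 from the log
	host := time.Date(2024, time.April, 29, 20, 6, 16, 911322778, time.UTC)  // the "Remote" timestamp
	d, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", d, ok)                     // delta ≈ 91.986496ms
}
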
	I0429 20:06:17.031957   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.032261   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:17.035096   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.035467   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.035497   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.035620   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036246   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036425   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:17.036515   66875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:06:17.036569   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:17.036698   66875 ssh_runner.go:195] Run: cat /version.json
	I0429 20:06:17.036726   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:17.039300   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039595   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039813   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.039848   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.039907   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:17.039984   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:17.040017   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:17.040069   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:17.040172   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:17.040230   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:17.040329   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:17.040382   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:17.040483   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:17.040636   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:17.137510   66875 ssh_runner.go:195] Run: systemctl --version
	I0429 20:06:17.160834   66875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:06:17.320792   66875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:06:17.328367   66875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:06:17.328448   66875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:06:17.349698   66875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:06:17.349724   66875 start.go:494] detecting cgroup driver to use...
	I0429 20:06:17.349807   66875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:06:17.372156   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:06:17.388142   66875 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:06:17.388206   66875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:06:17.406108   66875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:06:17.422323   66875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:06:17.555079   66875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:06:17.727126   66875 docker.go:233] disabling docker service ...
	I0429 20:06:17.727194   66875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:06:17.743136   66875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:06:17.757045   66875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:06:17.885705   66875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:06:18.021993   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:06:18.039020   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:06:18.063267   66875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:06:18.063330   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.076473   66875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:06:18.076545   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.089566   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.102912   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.116940   66875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:06:18.130940   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.150505   66875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.177724   66875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:18.191088   66875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:06:18.203560   66875 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:06:18.203635   66875 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:06:18.221087   66875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:06:18.233719   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:18.383406   66875 ssh_runner.go:195] Run: sudo systemctl restart crio
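
The sed commands above rewrite keys in CRI-O's drop-in config (pause image, cgroup manager, conmon cgroup, default sysctls) idempotently before restarting the service. Below is a rough Go sketch of that edit-or-append pattern applied to a config string; the file contents shown are illustrative only and TOML table placement is ignored, so this is not how minikube actually edits /etc/crio/crio.conf.d/02-crio.conf.

// crio_conf_sketch.go — a rough sketch of the edit-or-append pattern the sed
// commands above apply to the CRI-O drop-in config. Contents are illustrative.
package main

import (
	"fmt"
	"regexp"
)

// ensureKey rewrites an existing `key = ...` line or appends one if missing.
func ensureKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return conf + "\n" + line + "\n"
}

func main() {
	conf := "[crio.runtime]\ncgroup_manager = \"systemd\"\n\n[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n"
	conf = ensureKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = ensureKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
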
	I0429 20:06:18.543941   66875 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:06:18.544029   66875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:06:18.550828   66875 start.go:562] Will wait 60s for crictl version
	I0429 20:06:18.550891   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:06:18.556158   66875 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:06:18.607004   66875 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:06:18.607083   66875 ssh_runner.go:195] Run: crio --version
	I0429 20:06:18.638282   66875 ssh_runner.go:195] Run: crio --version
	I0429 20:06:18.674135   66875 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:06:15.399474   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:15.899352   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:16.899106   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:17.899205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.399351   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.899319   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.399303   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:19.898824   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:18.675590   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetIP
	I0429 20:06:18.678673   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:18.679055   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:18.679096   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:18.679272   66875 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0429 20:06:18.685110   66875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:18.705804   66875 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:06:18.705967   66875 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:06:18.706036   66875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:18.750754   66875 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:06:18.750823   66875 ssh_runner.go:195] Run: which lz4
	I0429 20:06:18.755893   66875 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:06:18.760892   66875 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:06:18.760921   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 20:06:19.055680   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:21.552080   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:18.301855   65980 main.go:141] libmachine: (embed-certs-161370) Waiting to get IP...
	I0429 20:06:18.302804   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.303231   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.303273   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.303198   67921 retry.go:31] will retry after 279.123731ms: waiting for machine to come up
	I0429 20:06:18.584013   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.584661   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.584703   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.584630   67921 retry.go:31] will retry after 239.910483ms: waiting for machine to come up
	I0429 20:06:18.825978   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:18.826393   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:18.826425   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:18.826349   67921 retry.go:31] will retry after 312.324444ms: waiting for machine to come up
	I0429 20:06:19.139999   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:19.140583   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:19.140611   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:19.140535   67921 retry.go:31] will retry after 498.525047ms: waiting for machine to come up
	I0429 20:06:19.640244   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:19.640797   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:19.640828   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:19.640756   67921 retry.go:31] will retry after 479.301061ms: waiting for machine to come up
	I0429 20:06:20.121396   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:20.121982   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:20.122015   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:20.121941   67921 retry.go:31] will retry after 706.389673ms: waiting for machine to come up
	I0429 20:06:20.829691   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:20.830191   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:20.830247   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:20.830166   67921 retry.go:31] will retry after 1.145397308s: waiting for machine to come up
	I0429 20:06:21.977290   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:21.977747   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:21.977779   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:21.977691   67921 retry.go:31] will retry after 955.977029ms: waiting for machine to come up
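
The retry.go lines above show the KVM driver polling libvirt for the machine's IP with a growing, jittered delay until the domain reports an address. A minimal sketch of that poll-with-backoff pattern follows; the lookup function, delays, budget, and returned address are assumptions for illustration, not minikube's retry implementation.

// machine_ip_retry_sketch.go — illustrates the poll-with-backoff pattern in the
// retry.go lines above. The lookup function, delays, budget, and the returned
// address are assumptions for illustration, not minikube's implementation.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookup func() (string, error), budget time.Duration) (string, error) {
	deadline := time.Now().Add(budget)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		if delay < 4*time.Second {
			delay *= 2 // grow the wait, roughly like the increasing delays in the log
		}
	}
	return "", errors.New("machine did not report an IP within the budget")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.0.2.10", nil // placeholder address (TEST-NET-1), not from this run
	}, 30*time.Second)
	fmt.Println(ip, err)
}
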
	I0429 20:06:20.399233   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.898571   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.398855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:21.898885   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.399328   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:22.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.398965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:23.899248   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.398833   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:24.899039   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:20.561047   66875 crio.go:462] duration metric: took 1.805186908s to copy over tarball
	I0429 20:06:20.561137   66875 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:23.264543   66875 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.703371921s)
	I0429 20:06:23.264573   66875 crio.go:469] duration metric: took 2.7034954s to extract the tarball
	I0429 20:06:23.264581   66875 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:23.303558   66875 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:23.356825   66875 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 20:06:23.356854   66875 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:06:23.356873   66875 kubeadm.go:928] updating node { 192.168.61.106 8444 v1.30.0 crio true true} ...
	I0429 20:06:23.357007   66875 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-866143 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:23.357105   66875 ssh_runner.go:195] Run: crio config
	I0429 20:06:23.414195   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:06:23.414225   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:23.414237   66875 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:23.414267   66875 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.106 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-866143 NodeName:default-k8s-diff-port-866143 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:06:23.414459   66875 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.106
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-866143"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 20:06:23.414524   66875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:06:23.425977   66875 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:23.426089   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:23.437270   66875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0429 20:06:23.457613   66875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:23.479383   66875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
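
	The kubeadm config rendered above lands on the node as /var/tmp/minikube/kubeadm.yaml.new and is later diffed against, and copied over, the previous /var/tmp/minikube/kubeadm.yaml (see the "diff -u" and "cp" steps further down in this log). A minimal sketch of doing the same inspection by hand from the host, assuming the profile name used in this run:

	  # Hedged sketch: compare the freshly rendered config with the copy already on
	  # the node, then promote it, mirroring the steps minikube performs below.
	  minikube -p default-k8s-diff-port-866143 ssh -- \
	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	  minikube -p default-k8s-diff-port-866143 ssh -- \
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
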
	I0429 20:06:23.509517   66875 ssh_runner.go:195] Run: grep 192.168.61.106	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:23.514202   66875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:23.528721   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:23.666941   66875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:23.687710   66875 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143 for IP: 192.168.61.106
	I0429 20:06:23.687745   66875 certs.go:194] generating shared ca certs ...
	I0429 20:06:23.687768   66875 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:23.687952   66875 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:23.688005   66875 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:23.688020   66875 certs.go:256] generating profile certs ...
	I0429 20:06:23.688168   66875 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.key
	I0429 20:06:23.688260   66875 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.key.5d7fbd4b
	I0429 20:06:23.688318   66875 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.key
	I0429 20:06:23.688481   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:23.688532   66875 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:23.688548   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:23.688592   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:23.688628   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:23.688663   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:23.688722   66875 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:23.689611   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:23.743834   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:23.783115   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:23.819086   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:23.850794   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0429 20:06:23.882477   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:23.918607   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:23.947837   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:06:23.977241   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:24.005902   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:24.034910   66875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:24.064119   66875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:24.083879   66875 ssh_runner.go:195] Run: openssl version
	I0429 20:06:24.090651   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:24.104929   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.110955   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.111034   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:24.117914   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:24.131076   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:24.144790   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.150842   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.150926   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:24.157842   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:24.171737   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:24.186164   66875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.191924   66875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.191995   66875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:24.199385   66875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
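
	The "ln -fs" steps above implement OpenSSL's subject-hash lookup scheme: each CA certificate in /etc/ssl/certs gets a <hash>.0 symlink so it can be found by hash. A minimal sketch, using the minikube CA path from this run:

	  # Compute the subject hash and create the <hash>.0 symlink OpenSSL expects.
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
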
	I0429 20:06:24.213392   66875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:24.219369   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:24.226784   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:24.234655   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:24.242406   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:24.249904   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:24.257400   66875 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
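
	The openssl probes above pass -checkend 86400, which makes openssl exit non-zero when the certificate will expire within the next 86400 seconds (24 hours); that exit status is presumably what decides whether the profile certs need regenerating. A small sketch with an illustrative path:

	  # Exit status 0 means the certificate stays valid for at least another 24h.
	  if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	    echo "certificate expires within 24h; regenerate it"
	  fi
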
	I0429 20:06:24.264165   66875 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-866143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-866143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:24.264290   66875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:24.264353   66875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:24.310126   66875 cri.go:89] found id: ""
	I0429 20:06:24.310197   66875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:24.322134   66875 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:24.322155   66875 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:24.322160   66875 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:24.322223   66875 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:24.337713   66875 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:24.339184   66875 kubeconfig.go:125] found "default-k8s-diff-port-866143" server: "https://192.168.61.106:8444"
	I0429 20:06:24.342237   66875 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:24.353500   66875 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.106
	I0429 20:06:24.353545   66875 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:24.353560   66875 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:24.353627   66875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:24.399835   66875 cri.go:89] found id: ""
	I0429 20:06:24.399918   66875 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:24.426456   66875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:24.440261   66875 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:24.440282   66875 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:24.440376   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0429 20:06:24.450699   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:24.450766   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:24.462870   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0429 20:06:24.474894   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:24.474961   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:24.488607   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0429 20:06:24.499626   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:24.499685   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:24.514156   66875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0429 20:06:24.525958   66875 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:24.526018   66875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:24.537063   66875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:24.548503   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:24.687916   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:24.051367   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:26.550970   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:22.935362   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:22.935797   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:22.935827   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:22.935746   67921 retry.go:31] will retry after 1.25494649s: waiting for machine to come up
	I0429 20:06:24.192017   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:24.192613   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:24.192641   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:24.192556   67921 retry.go:31] will retry after 1.641885834s: waiting for machine to come up
	I0429 20:06:25.836686   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:25.837170   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:25.837193   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:25.837125   67921 retry.go:31] will retry after 2.794216099s: waiting for machine to come up
	I0429 20:06:25.398515   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:25.898944   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.399360   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.899294   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.399520   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.899434   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.398734   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:28.898479   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.399413   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:29.899236   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:26.234143   66875 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.546180467s)
	I0429 20:06:26.234181   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.502030   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:26.577778   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
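
	Because existing configuration files were found, the restart path drives individual "kubeadm init phase" subcommands instead of a full init. The sequence above, together with the addon phase run further below, boils down to roughly the following (config path taken from this run):

	  CFG=/var/tmp/minikube/kubeadm.yaml
	  sudo kubeadm init phase certs all --config "$CFG"
	  sudo kubeadm init phase kubeconfig all --config "$CFG"
	  sudo kubeadm init phase kubelet-start --config "$CFG"
	  sudo kubeadm init phase control-plane all --config "$CFG"
	  sudo kubeadm init phase etcd local --config "$CFG"
	  sudo kubeadm init phase addon all --config "$CFG"
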
	I0429 20:06:26.689836   66875 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:26.689982   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.190231   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.690207   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:27.729434   66875 api_server.go:72] duration metric: took 1.039599386s to wait for apiserver process to appear ...
	I0429 20:06:27.729473   66875 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:06:27.729497   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:27.730016   66875 api_server.go:269] stopped: https://192.168.61.106:8444/healthz: Get "https://192.168.61.106:8444/healthz": dial tcp 192.168.61.106:8444: connect: connection refused
	I0429 20:06:28.230353   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:28.551049   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:31.051387   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:31.411151   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:31.411188   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:31.411205   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:31.424074   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:31.424106   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:31.729916   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:31.737269   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:31.737299   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:32.229834   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:32.237900   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:32.237935   66875 api_server.go:103] status: https://192.168.61.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:32.730529   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:06:32.735043   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 200:
	ok
	I0429 20:06:32.743999   66875 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:32.744026   66875 api_server.go:131] duration metric: took 5.014546615s to wait for apiserver health ...
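
	The healthz polling above can be reproduced with curl against the same endpoint: anonymous requests are initially rejected with 403, then /healthz returns 500 while the remaining post-start hooks complete, and finally 200 "ok". A hedged sketch (IP and port from this run; -k skips TLS verification, just as the anonymous probe does):

	  curl -k https://192.168.61.106:8444/healthz
	  # Append ?verbose to see the per-check [+]/[-] breakdown shown in the log.
	  curl -k "https://192.168.61.106:8444/healthz?verbose"
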
	I0429 20:06:32.744035   66875 cni.go:84] Creating CNI manager for ""
	I0429 20:06:32.744041   66875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:32.745889   66875 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:06:28.633451   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:28.633950   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:28.633979   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:28.633906   67921 retry.go:31] will retry after 2.251092878s: waiting for machine to come up
	I0429 20:06:30.887722   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:30.888251   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:30.888283   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:30.888208   67921 retry.go:31] will retry after 2.941721217s: waiting for machine to come up
	I0429 20:06:32.747198   66875 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:32.760578   66875 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:32.786719   66875 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:32.797795   66875 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:32.797830   66875 system_pods.go:61] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:32.797838   66875 system_pods.go:61] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:32.797844   66875 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:32.797854   66875 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:32.797859   66875 system_pods.go:61] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0429 20:06:32.797866   66875 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:32.797873   66875 system_pods.go:61] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:32.797880   66875 system_pods.go:61] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0429 20:06:32.797888   66875 system_pods.go:74] duration metric: took 11.14839ms to wait for pod list to return data ...
	I0429 20:06:32.797902   66875 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:32.801888   66875 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:32.801909   66875 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:32.801918   66875 node_conditions.go:105] duration metric: took 4.010782ms to run NodePressure ...
	I0429 20:06:32.801934   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:33.088679   66875 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:33.094165   66875 kubeadm.go:733] kubelet initialised
	I0429 20:06:33.094185   66875 kubeadm.go:734] duration metric: took 5.479589ms waiting for restarted kubelet to initialise ...
	I0429 20:06:33.094192   66875 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:33.101524   66875 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.106879   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.106911   66875 pod_ready.go:81] duration metric: took 5.352162ms for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.106923   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.106946   66875 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.111446   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.111469   66875 pod_ready.go:81] duration metric: took 4.507858ms for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.111478   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.111483   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.115613   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.115643   66875 pod_ready.go:81] duration metric: took 4.152743ms for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.115654   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.115663   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.191660   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.191695   66875 pod_ready.go:81] duration metric: took 76.012388ms for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.191707   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.191713   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.592489   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-proxy-zddtx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.592522   66875 pod_ready.go:81] duration metric: took 400.801861ms for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.592535   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-proxy-zddtx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.592544   66875 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:33.990624   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.990655   66875 pod_ready.go:81] duration metric: took 398.101779ms for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:33.990667   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:33.990673   66875 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:34.391120   66875 pod_ready.go:97] node "default-k8s-diff-port-866143" hosting pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:34.391148   66875 pod_ready.go:81] duration metric: took 400.467456ms for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	E0429 20:06:34.391165   66875 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-866143" hosting pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:34.391173   66875 pod_ready.go:38] duration metric: took 1.296972775s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:34.391191   66875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:06:34.408817   66875 ops.go:34] apiserver oom_adj: -16
	I0429 20:06:34.408845   66875 kubeadm.go:591] duration metric: took 10.086677852s to restartPrimaryControlPlane
	I0429 20:06:34.408856   66875 kubeadm.go:393] duration metric: took 10.144698168s to StartCluster
	I0429 20:06:34.408876   66875 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:34.408961   66875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:06:34.411093   66875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:34.411379   66875 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.106 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:06:34.413055   66875 out.go:177] * Verifying Kubernetes components...
	I0429 20:06:34.411518   66875 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:06:34.411607   66875 config.go:182] Loaded profile config "default-k8s-diff-port-866143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:34.414229   66875 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414239   66875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:34.414261   66875 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-866143"
	I0429 20:06:34.414238   66875 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414232   66875 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-866143"
	I0429 20:06:34.414341   66875 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.414355   66875 addons.go:243] addon metrics-server should already be in state true
	I0429 20:06:34.414382   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.414381   66875 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.414396   66875 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:06:34.414439   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.414650   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414677   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.414746   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414758   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.414890   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.414923   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.433279   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I0429 20:06:34.433827   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.434444   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.434474   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.434873   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.435436   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.435483   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.435739   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46105
	I0429 20:06:34.435746   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0429 20:06:34.436117   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.436245   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.436638   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.436678   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.436734   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.436747   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.437011   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.437057   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.437218   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.437601   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.437630   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.441092   66875 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-866143"
	W0429 20:06:34.441118   66875 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:06:34.441146   66875 host.go:66] Checking if "default-k8s-diff-port-866143" exists ...
	I0429 20:06:34.441550   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.441582   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.451571   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42563
	I0429 20:06:34.452041   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.452627   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.452650   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.453080   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.453401   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.455145   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0429 20:06:34.455335   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.457339   66875 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:06:34.455992   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.456826   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
	I0429 20:06:34.458912   66875 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:06:34.458925   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:06:34.458942   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.459155   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.459818   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.459836   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.460050   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.460068   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.460196   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.460406   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.460450   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.461005   66875 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:06:34.461051   66875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:06:34.462529   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.462624   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.464140   66875 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:06:30.398730   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:30.898542   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.399309   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:31.898751   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.399374   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:32.899262   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.398723   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:33.899281   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.399356   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.899305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:34.463014   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.463255   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.465585   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.465598   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:06:34.465623   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:06:34.465652   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.465703   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.465892   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.466043   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.468951   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.469342   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.469407   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.469645   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.469817   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.469984   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.470137   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.484411   66875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0429 20:06:34.484864   66875 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:06:34.485366   66875 main.go:141] libmachine: Using API Version  1
	I0429 20:06:34.485396   66875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:06:34.485759   66875 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:06:34.485937   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetState
	I0429 20:06:34.487715   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .DriverName
	I0429 20:06:34.487962   66875 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:06:34.487975   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:06:34.487989   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHHostname
	I0429 20:06:34.490407   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.490724   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:de:09", ip: ""} in network mk-default-k8s-diff-port-866143: {Iface:virbr3 ExpiryTime:2024-04-29 21:06:09 +0000 UTC Type:0 Mac:52:54:00:af:de:09 Iaid: IPaddr:192.168.61.106 Prefix:24 Hostname:default-k8s-diff-port-866143 Clientid:01:52:54:00:af:de:09}
	I0429 20:06:34.490748   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | domain default-k8s-diff-port-866143 has defined IP address 192.168.61.106 and MAC address 52:54:00:af:de:09 in network mk-default-k8s-diff-port-866143
	I0429 20:06:34.490890   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHPort
	I0429 20:06:34.491045   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHKeyPath
	I0429 20:06:34.491146   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .GetSSHUsername
	I0429 20:06:34.491274   66875 sshutil.go:53] new ssh client: &{IP:192.168.61.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/default-k8s-diff-port-866143/id_rsa Username:docker}
	I0429 20:06:34.618088   66875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:34.638582   66875 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-866143" to be "Ready" ...
	I0429 20:06:34.729046   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:06:34.729633   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:06:34.729649   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:06:34.752200   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:06:34.770107   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:06:34.770143   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:06:34.847081   66875 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:06:34.847117   66875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:06:34.889992   66875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:06:35.821090   66875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.091986938s)
	I0429 20:06:35.821127   66875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.068905753s)
	I0429 20:06:35.821145   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821150   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821157   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821162   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821490   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821505   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821514   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821524   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821528   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821534   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.821549   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821540   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.821902   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.821923   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.821936   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.822007   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.822024   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.828303   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.828348   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.828591   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.828606   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.828632   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.843540   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.843566   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.843860   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.843877   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.843886   66875 main.go:141] libmachine: Making call to close driver server
	I0429 20:06:35.843894   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) Calling .Close
	I0429 20:06:35.844127   66875 main.go:141] libmachine: (default-k8s-diff-port-866143) DBG | Closing plugin on server side
	I0429 20:06:35.844170   66875 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:06:35.844188   66875 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:06:35.844203   66875 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-866143"
	I0429 20:06:35.846214   66875 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0429 20:06:33.549917   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:35.550564   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:33.831181   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:33.831552   65980 main.go:141] libmachine: (embed-certs-161370) DBG | unable to find current IP address of domain embed-certs-161370 in network mk-embed-certs-161370
	I0429 20:06:33.831581   65980 main.go:141] libmachine: (embed-certs-161370) DBG | I0429 20:06:33.831506   67921 retry.go:31] will retry after 5.040485428s: waiting for machine to come up
	I0429 20:06:35.399419   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:35.899244   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.398934   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:36.898847   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.399273   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:37.899102   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.398748   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:38.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.399524   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:39.898813   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:35.847674   66875 addons.go:505] duration metric: took 1.436173952s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0429 20:06:36.641963   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:38.642738   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:38.873188   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.873625   65980 main.go:141] libmachine: (embed-certs-161370) Found IP for machine: 192.168.50.184
	I0429 20:06:38.873653   65980 main.go:141] libmachine: (embed-certs-161370) Reserving static IP address...
	I0429 20:06:38.873669   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has current primary IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.874037   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "embed-certs-161370", mac: "52:54:00:e6:05:1f", ip: "192.168.50.184"} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:38.874091   65980 main.go:141] libmachine: (embed-certs-161370) Reserved static IP address: 192.168.50.184
	I0429 20:06:38.874113   65980 main.go:141] libmachine: (embed-certs-161370) DBG | skip adding static IP to network mk-embed-certs-161370 - found existing host DHCP lease matching {name: "embed-certs-161370", mac: "52:54:00:e6:05:1f", ip: "192.168.50.184"}
	I0429 20:06:38.874132   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Getting to WaitForSSH function...
	I0429 20:06:38.874151   65980 main.go:141] libmachine: (embed-certs-161370) Waiting for SSH to be available...
	I0429 20:06:38.875891   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.876205   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:38.876237   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:38.876401   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Using SSH client type: external
	I0429 20:06:38.876425   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Using SSH private key: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa (-rw-------)
	I0429 20:06:38.876455   65980 main.go:141] libmachine: (embed-certs-161370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0429 20:06:38.876475   65980 main.go:141] libmachine: (embed-certs-161370) DBG | About to run SSH command:
	I0429 20:06:38.876486   65980 main.go:141] libmachine: (embed-certs-161370) DBG | exit 0
	I0429 20:06:39.006684   65980 main.go:141] libmachine: (embed-certs-161370) DBG | SSH cmd err, output: <nil>: 
	I0429 20:06:39.007072   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetConfigRaw
	I0429 20:06:39.007701   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:39.010189   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.010539   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.010577   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.010783   65980 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/config.json ...
	I0429 20:06:39.010970   65980 machine.go:94] provisionDockerMachine start ...
	I0429 20:06:39.010986   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:39.011196   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.013422   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.013832   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.013862   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.013986   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.014183   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.014377   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.014528   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.014710   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.014868   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.014878   65980 main.go:141] libmachine: About to run SSH command:
	hostname
	I0429 20:06:39.119151   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0429 20:06:39.119183   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.119425   65980 buildroot.go:166] provisioning hostname "embed-certs-161370"
	I0429 20:06:39.119449   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.119606   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.122418   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.122725   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.122755   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.122894   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.123087   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.123235   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.123371   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.123547   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.123719   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.123734   65980 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-161370 && echo "embed-certs-161370" | sudo tee /etc/hostname
	I0429 20:06:39.247323   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-161370
	
	I0429 20:06:39.247360   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.250202   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.250594   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.250623   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.250761   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.250956   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.251158   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.251354   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.251536   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.251724   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.251746   65980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-161370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-161370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-161370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 20:06:39.370366   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 20:06:39.370395   65980 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18774-7754/.minikube CaCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18774-7754/.minikube}
	I0429 20:06:39.370415   65980 buildroot.go:174] setting up certificates
	I0429 20:06:39.370429   65980 provision.go:84] configureAuth start
	I0429 20:06:39.370441   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetMachineName
	I0429 20:06:39.370754   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:39.373600   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.373977   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.374011   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.374305   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.376654   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.376999   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.377032   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.377156   65980 provision.go:143] copyHostCerts
	I0429 20:06:39.377217   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem, removing ...
	I0429 20:06:39.377228   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem
	I0429 20:06:39.377279   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/ca.pem (1082 bytes)
	I0429 20:06:39.377367   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem, removing ...
	I0429 20:06:39.377375   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem
	I0429 20:06:39.377393   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/cert.pem (1123 bytes)
	I0429 20:06:39.377446   65980 exec_runner.go:144] found /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem, removing ...
	I0429 20:06:39.377453   65980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem
	I0429 20:06:39.377470   65980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18774-7754/.minikube/key.pem (1675 bytes)
	I0429 20:06:39.377523   65980 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem org=jenkins.embed-certs-161370 san=[127.0.0.1 192.168.50.184 embed-certs-161370 localhost minikube]
	I0429 20:06:39.441865   65980 provision.go:177] copyRemoteCerts
	I0429 20:06:39.441931   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 20:06:39.441954   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.445189   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.445633   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.445677   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.445918   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.446166   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.446364   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.446521   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:39.535703   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 20:06:39.571033   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0429 20:06:39.604181   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 20:06:39.639250   65980 provision.go:87] duration metric: took 268.808275ms to configureAuth
	I0429 20:06:39.639339   65980 buildroot.go:189] setting minikube options for container-runtime
	I0429 20:06:39.639575   65980 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:06:39.639668   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.642544   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.642975   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.643006   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.643146   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.643348   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.643507   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.643671   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.643838   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:39.644011   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:39.644039   65980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0429 20:06:39.974134   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0429 20:06:39.974168   65980 machine.go:97] duration metric: took 963.184467ms to provisionDockerMachine
	I0429 20:06:39.974186   65980 start.go:293] postStartSetup for "embed-certs-161370" (driver="kvm2")
	I0429 20:06:39.974201   65980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 20:06:39.974229   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:39.974601   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 20:06:39.974636   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:39.977843   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.978295   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:39.978328   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:39.978528   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:39.978768   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:39.978939   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:39.979144   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.066379   65980 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 20:06:40.071720   65980 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 20:06:40.071742   65980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/addons for local assets ...
	I0429 20:06:40.071798   65980 filesync.go:126] Scanning /home/jenkins/minikube-integration/18774-7754/.minikube/files for local assets ...
	I0429 20:06:40.071875   65980 filesync.go:149] local asset: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem -> 151242.pem in /etc/ssl/certs
	I0429 20:06:40.071965   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 20:06:40.082556   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:40.112774   65980 start.go:296] duration metric: took 138.571139ms for postStartSetup
	I0429 20:06:40.112827   65980 fix.go:56] duration metric: took 23.080734046s for fixHost
	I0429 20:06:40.112859   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.115931   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.116414   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.116448   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.116643   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.116859   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.117026   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.117169   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.117358   65980 main.go:141] libmachine: Using SSH client type: native
	I0429 20:06:40.117560   65980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0429 20:06:40.117576   65980 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 20:06:40.223697   65980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714421200.206855033
	
	I0429 20:06:40.223722   65980 fix.go:216] guest clock: 1714421200.206855033
	I0429 20:06:40.223732   65980 fix.go:229] Guest: 2024-04-29 20:06:40.206855033 +0000 UTC Remote: 2024-04-29 20:06:40.112832003 +0000 UTC m=+362.399028562 (delta=94.02303ms)
	I0429 20:06:40.223777   65980 fix.go:200] guest clock delta is within tolerance: 94.02303ms
	I0429 20:06:40.223782   65980 start.go:83] releasing machines lock for "embed-certs-161370", held for 23.191744513s
	I0429 20:06:40.223804   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.224106   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:40.226904   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.227299   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.227328   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.227462   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.227955   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.228117   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:06:40.228199   65980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 20:06:40.228238   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.228353   65980 ssh_runner.go:195] Run: cat /version.json
	I0429 20:06:40.228378   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:06:40.230943   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231151   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231370   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.231401   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231585   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.231595   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:40.231629   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:40.231794   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:06:40.231806   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.231982   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.232000   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:06:40.232182   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:06:40.232197   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.232303   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:06:40.337533   65980 ssh_runner.go:195] Run: systemctl --version
	I0429 20:06:40.347252   65980 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0429 20:06:40.494668   65980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 20:06:40.502707   65980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 20:06:40.502788   65980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 20:06:40.522261   65980 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 20:06:40.522298   65980 start.go:494] detecting cgroup driver to use...
	I0429 20:06:40.522368   65980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 20:06:40.540576   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 20:06:40.557130   65980 docker.go:217] disabling cri-docker service (if available) ...
	I0429 20:06:40.557203   65980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0429 20:06:40.573803   65980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0429 20:06:40.589730   65980 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0429 20:06:40.731625   65980 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0429 20:06:40.902594   65980 docker.go:233] disabling docker service ...
	I0429 20:06:40.902665   65980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0429 20:06:40.921454   65980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0429 20:06:40.938734   65980 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0429 20:06:41.081822   65980 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0429 20:06:41.237778   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0429 20:06:41.254086   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 20:06:41.276277   65980 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0429 20:06:41.276362   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.288903   65980 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0429 20:06:41.288972   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.301347   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.313639   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.325885   65980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 20:06:41.338215   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.350839   65980 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.372124   65980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0429 20:06:41.385505   65980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 20:06:41.397626   65980 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0429 20:06:41.397704   65980 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0429 20:06:41.413915   65980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 20:06:41.427068   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:41.575690   65980 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0429 20:06:41.748047   65980 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0429 20:06:41.748132   65980 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0429 20:06:41.753313   65980 start.go:562] Will wait 60s for crictl version
	I0429 20:06:41.753379   65980 ssh_runner.go:195] Run: which crictl
	I0429 20:06:41.757672   65980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 20:06:41.794045   65980 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0429 20:06:41.794150   65980 ssh_runner.go:195] Run: crio --version
	I0429 20:06:41.831177   65980 ssh_runner.go:195] Run: crio --version
	I0429 20:06:41.865125   65980 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0429 20:06:38.049006   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:40.050003   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:42.050213   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:41.866698   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetIP
	I0429 20:06:41.869477   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:41.869815   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:06:41.869848   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:06:41.870107   65980 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0429 20:06:41.874917   65980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:41.889196   65980 kubeadm.go:877] updating cluster {Name:embed-certs-161370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 20:06:41.889353   65980 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 20:06:41.889423   65980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:41.936285   65980 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0429 20:06:41.936352   65980 ssh_runner.go:195] Run: which lz4
	I0429 20:06:41.941893   65980 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 20:06:41.947071   65980 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 20:06:41.947112   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0429 20:06:40.399024   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:40.899056   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.399275   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.899285   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.399200   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:42.899243   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.399298   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:43.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.398590   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:44.899346   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:41.143962   66875 node_ready.go:53] node "default-k8s-diff-port-866143" has status "Ready":"False"
	I0429 20:06:41.645981   66875 node_ready.go:49] node "default-k8s-diff-port-866143" has status "Ready":"True"
	I0429 20:06:41.646007   66875 node_ready.go:38] duration metric: took 7.007388661s for node "default-k8s-diff-port-866143" to be "Ready" ...
	I0429 20:06:41.646018   66875 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:41.652664   66875 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.657667   66875 pod_ready.go:92] pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.657685   66875 pod_ready.go:81] duration metric: took 4.993051ms for pod "coredns-7db6d8ff4d-7m65s" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.657694   66875 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.662632   66875 pod_ready.go:92] pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.662650   66875 pod_ready.go:81] duration metric: took 4.950519ms for pod "etcd-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.662658   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.667488   66875 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.667509   66875 pod_ready.go:81] duration metric: took 4.844299ms for pod "kube-apiserver-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.667520   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.672480   66875 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:41.672501   66875 pod_ready.go:81] duration metric: took 4.974639ms for pod "kube-controller-manager-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:41.672512   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:42.042828   66875 pod_ready.go:92] pod "kube-proxy-zddtx" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:42.042856   66875 pod_ready.go:81] duration metric: took 370.336555ms for pod "kube-proxy-zddtx" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:42.042868   66875 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.051930   66875 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:44.548970   66875 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace has status "Ready":"True"
	I0429 20:06:44.548999   66875 pod_ready.go:81] duration metric: took 2.506120519s for pod "kube-scheduler-default-k8s-diff-port-866143" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.549011   66875 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	I0429 20:06:44.051077   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:46.052233   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:43.759688   65980 crio.go:462] duration metric: took 1.817838869s to copy over tarball
	I0429 20:06:43.759784   65980 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 20:06:46.405802   65980 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.64598022s)
	I0429 20:06:46.405851   65980 crio.go:469] duration metric: took 2.646122331s to extract the tarball
	I0429 20:06:46.405861   65980 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 20:06:46.444700   65980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0429 20:06:46.503047   65980 crio.go:514] all images are preloaded for cri-o runtime.
	I0429 20:06:46.503086   65980 cache_images.go:84] Images are preloaded, skipping loading
	I0429 20:06:46.503098   65980 kubeadm.go:928] updating node { 192.168.50.184 8443 v1.30.0 crio true true} ...
	I0429 20:06:46.503234   65980 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-161370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 20:06:46.503334   65980 ssh_runner.go:195] Run: crio config
	I0429 20:06:46.563489   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:06:46.563511   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:46.563523   65980 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 20:06:46.563542   65980 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-161370 NodeName:embed-certs-161370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 20:06:46.563662   65980 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-161370"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
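	The block above is the kubeadm configuration minikube renders for this node: four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---", which the next log lines copy to /var/tmp/minikube/kubeadm.yaml.new on the host. As a minimal, hypothetical sketch (not minikube's own code), the Go program below walks such a multi-document file with gopkg.in/yaml.v3 and prints each document's apiVersion and kind; the file name kubeadm.yaml is a placeholder for a local copy of the rendered config.

	// listkinds.go - sketch only: list apiVersion/kind of each YAML document
	// in a multi-document kubeadm config like the one shown above.
	package main

	import (
		"errors"
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // placeholder: a local copy of the rendered config
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // no further documents after the last "---"
				}
				log.Fatal(err)
			}
			fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
		}
	}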
	I0429 20:06:46.563719   65980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 20:06:46.576288   65980 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 20:06:46.576350   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0429 20:06:46.586807   65980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0429 20:06:46.605883   65980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 20:06:46.626741   65980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0429 20:06:46.647223   65980 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0429 20:06:46.652262   65980 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 20:06:46.667095   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:06:46.804937   65980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:06:46.831022   65980 certs.go:68] Setting up /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370 for IP: 192.168.50.184
	I0429 20:06:46.831048   65980 certs.go:194] generating shared ca certs ...
	I0429 20:06:46.831067   65980 certs.go:226] acquiring lock for ca certs: {Name:mke4f887d1b12d48598531109a1f9d4e6514ee8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:06:46.831251   65980 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key
	I0429 20:06:46.831295   65980 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key
	I0429 20:06:46.831306   65980 certs.go:256] generating profile certs ...
	I0429 20:06:46.831385   65980 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/client.key
	I0429 20:06:46.831440   65980 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.key.9384fac7
	I0429 20:06:46.831476   65980 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.key
	I0429 20:06:46.831582   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem (1338 bytes)
	W0429 20:06:46.831610   65980 certs.go:480] ignoring /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124_empty.pem, impossibly tiny 0 bytes
	I0429 20:06:46.831617   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 20:06:46.831635   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/ca.pem (1082 bytes)
	I0429 20:06:46.831662   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/cert.pem (1123 bytes)
	I0429 20:06:46.831691   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/certs/key.pem (1675 bytes)
	I0429 20:06:46.831729   65980 certs.go:484] found cert: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem (1708 bytes)
	I0429 20:06:46.832571   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 20:06:46.896363   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 20:06:46.939336   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 20:06:46.976256   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0429 20:06:47.007777   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0429 20:06:47.045019   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0429 20:06:47.079584   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 20:06:47.114002   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/embed-certs-161370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 20:06:47.142163   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/ssl/certs/151242.pem --> /usr/share/ca-certificates/151242.pem (1708 bytes)
	I0429 20:06:47.170063   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 20:06:47.199098   65980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18774-7754/.minikube/certs/15124.pem --> /usr/share/ca-certificates/15124.pem (1338 bytes)
	I0429 20:06:47.228985   65980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 20:06:47.250928   65980 ssh_runner.go:195] Run: openssl version
	I0429 20:06:47.258215   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/151242.pem && ln -fs /usr/share/ca-certificates/151242.pem /etc/ssl/certs/151242.pem"
	I0429 20:06:47.271653   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.277100   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 18:53 /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.277183   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/151242.pem
	I0429 20:06:47.283876   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/151242.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 20:06:47.297519   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 20:06:47.311104   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.316347   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.316408   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 20:06:47.322992   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 20:06:47.337744   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15124.pem && ln -fs /usr/share/ca-certificates/15124.pem /etc/ssl/certs/15124.pem"
	I0429 20:06:47.351332   65980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.356912   65980 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 18:53 /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.356964   65980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15124.pem
	I0429 20:06:47.363339   65980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15124.pem /etc/ssl/certs/51391683.0"
	I0429 20:06:47.378501   65980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 20:06:47.383995   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0429 20:06:47.391157   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0429 20:06:47.398039   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0429 20:06:47.405117   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0429 20:06:47.412125   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0429 20:06:47.419257   65980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
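	The openssl x509 -checkend 86400 invocations above ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a failing check would force the cert to be regenerated. A rough Go equivalent using crypto/x509 is sketched below; it is illustrative only, not the code minikube runs, and it takes the certificate path as its first argument.

	// checkend.go - sketch: exit non-zero if the certificate expires within 24h,
	// roughly mirroring "openssl x509 -noout -checkend 86400".
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile(os.Args[1]) // e.g. a cert under /var/lib/minikube/certs
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}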
	I0429 20:06:47.425917   65980 kubeadm.go:391] StartCluster: {Name:embed-certs-161370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-161370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 20:06:47.426009   65980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0429 20:06:47.426049   65980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:47.469133   65980 cri.go:89] found id: ""
	I0429 20:06:47.469216   65980 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0429 20:06:47.481852   65980 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0429 20:06:47.481878   65980 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0429 20:06:47.481883   65980 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0429 20:06:47.481926   65980 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0429 20:06:47.495254   65980 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0429 20:06:47.496760   65980 kubeconfig.go:125] found "embed-certs-161370" server: "https://192.168.50.184:8443"
	I0429 20:06:47.499898   65980 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0429 20:06:47.511866   65980 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.184
	I0429 20:06:47.511903   65980 kubeadm.go:1154] stopping kube-system containers ...
	I0429 20:06:47.511917   65980 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0429 20:06:47.511972   65980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0429 20:06:47.563879   65980 cri.go:89] found id: ""
	I0429 20:06:47.563956   65980 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0429 20:06:47.583490   65980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:06:47.595867   65980 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:06:47.595893   65980 kubeadm.go:156] found existing configuration files:
	
	I0429 20:06:47.595947   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:06:47.608218   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:06:47.608283   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:06:47.620329   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:06:47.631394   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:06:47.631527   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:06:47.643107   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:06:47.654164   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:06:47.654233   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:06:47.665890   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:06:47.676817   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:06:47.676859   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:06:47.688608   65980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:06:47.700068   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:45.398908   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:45.898619   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.398795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.899058   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.399257   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:47.899269   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.398874   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:48.898653   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.399305   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:49.898855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:46.556692   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:49.056546   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:48.550949   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:50.551905   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:47.821391   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.623284   65980 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.31791052s)
	I0429 20:06:49.623343   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.870630   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:49.950525   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:50.061240   65980 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:06:50.061331   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.562165   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.062299   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.139853   65980 api_server.go:72] duration metric: took 1.078602354s to wait for apiserver process to appear ...
	I0429 20:06:51.139883   65980 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:06:51.139905   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:51.140472   65980 api_server.go:269] stopped: https://192.168.50.184:8443/healthz: Get "https://192.168.50.184:8443/healthz": dial tcp 192.168.50.184:8443: connect: connection refused
	I0429 20:06:51.640813   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:50.398577   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:50.899284   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.399361   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.899134   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.399211   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:52.898733   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.399280   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:53.898915   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.399264   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:54.898840   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:51.057650   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:53.559429   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:53.049570   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:55.049866   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:57.050558   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:54.540707   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:54.540765   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:54.540797   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:54.618982   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0429 20:06:54.619016   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0429 20:06:54.640333   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:54.674491   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:54.674542   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:55.140955   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:55.157479   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:55.157517   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:55.639999   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:55.646278   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:55.646311   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:56.140938   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:56.147336   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:56.147371   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:56.640927   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:56.647027   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:56.647054   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:57.140894   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:57.145193   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:57.145236   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:57.640842   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:57.645453   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0429 20:06:57.645478   65980 api_server.go:103] status: https://192.168.50.184:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0429 20:06:58.140524   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:06:58.146317   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0429 20:06:58.153972   65980 api_server.go:141] control plane version: v1.30.0
	I0429 20:06:58.154011   65980 api_server.go:131] duration metric: took 7.014120443s to wait for apiserver health ...
	I0429 20:06:58.154028   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:06:58.154036   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:06:58.155341   65980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
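	The api_server.go lines above show the restart waiting on https://192.168.50.184:8443/healthz, retrying roughly every 500ms while the endpoint answers 403 or 500 and stopping once it returns 200 (about 7s in this run). The Go sketch below reproduces that polling loop in outline only; the URL mirrors the log, while the deadline and the TLS handling (skipping verification) are assumptions, not what minikube actually does.

	// healthzwait.go - sketch: poll the apiserver /healthz endpoint until it
	// answers 200 OK or a deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		deadline := time.Now().Add(4 * time.Minute) // assumed budget, not from the log
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.50.184:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for apiserver healthz")
	}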
	I0429 20:06:55.398622   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:55.898563   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.399306   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.898473   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.399293   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:57.899278   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.399121   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:58.899291   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.399197   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:59.898901   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:06:56.056503   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:58.056988   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:59.053737   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:01.555480   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:06:58.156794   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:06:58.176870   65980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:06:58.215333   65980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:06:58.230619   65980 system_pods.go:59] 8 kube-system pods found
	I0429 20:06:58.230655   65980 system_pods.go:61] "coredns-7db6d8ff4d-wjfff" [bd92e456-b538-49ae-984b-c6bcea6add30] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0429 20:06:58.230667   65980 system_pods.go:61] "etcd-embed-certs-161370" [da2d022f-33c4-49b7-b997-a6783043f3e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0429 20:06:58.230675   65980 system_pods.go:61] "kube-apiserver-embed-certs-161370" [032913c9-bb91-46ba-ad4d-a4d5b63d806f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0429 20:06:58.230681   65980 system_pods.go:61] "kube-controller-manager-embed-certs-161370" [2f3ae1ff-0688-4c70-a888-5e1e640f64bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0429 20:06:58.230685   65980 system_pods.go:61] "kube-proxy-9kmg8" [01bbd2ca-24d2-4c7c-b4ea-79604ac3f486] Running
	I0429 20:06:58.230689   65980 system_pods.go:61] "kube-scheduler-embed-certs-161370" [c88ab7cc-1aef-48cb-814e-eff8e946885c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0429 20:06:58.230694   65980 system_pods.go:61] "metrics-server-569cc877fc-c4h7f" [bf1cae8d-cca1-4573-935f-e60118ca9575] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:06:58.230698   65980 system_pods.go:61] "storage-provisioner" [1686a084-f28b-4b26-b748-85a2a3613dde] Running
	I0429 20:06:58.230703   65980 system_pods.go:74] duration metric: took 15.348727ms to wait for pod list to return data ...
	I0429 20:06:58.230713   65980 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:06:58.233411   65980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:06:58.233436   65980 node_conditions.go:123] node cpu capacity is 2
	I0429 20:06:58.233447   65980 node_conditions.go:105] duration metric: took 2.729694ms to run NodePressure ...
	I0429 20:06:58.233466   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0429 20:06:58.532729   65980 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0429 20:06:58.538018   65980 kubeadm.go:733] kubelet initialised
	I0429 20:06:58.538038   65980 kubeadm.go:734] duration metric: took 5.283028ms waiting for restarted kubelet to initialise ...
	I0429 20:06:58.538046   65980 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:06:58.544267   65980 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:00.553501   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:00.398537   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.899359   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.399125   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:01.899428   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.399457   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:02.899355   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.399421   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:03.899376   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.399331   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:04.899263   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:00.555517   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:02.557429   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:05.056216   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:04.049941   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:06.051285   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:03.069330   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:03.554710   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.554732   65980 pod_ready.go:81] duration metric: took 5.010440873s for pod "coredns-7db6d8ff4d-wjfff" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.554742   65980 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.562277   65980 pod_ready.go:92] pod "etcd-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.562298   65980 pod_ready.go:81] duration metric: took 7.550156ms for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.562306   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.567038   65980 pod_ready.go:92] pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:03.567060   65980 pod_ready.go:81] duration metric: took 4.748002ms for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:03.567069   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.573632   65980 pod_ready.go:92] pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.573664   65980 pod_ready.go:81] duration metric: took 1.006574407s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.573675   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9kmg8" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.578356   65980 pod_ready.go:92] pod "kube-proxy-9kmg8" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.578377   65980 pod_ready.go:81] duration metric: took 4.694437ms for pod "kube-proxy-9kmg8" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.578388   65980 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.749703   65980 pod_ready.go:92] pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:07:04.749733   65980 pod_ready.go:81] duration metric: took 171.336391ms for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:04.749750   65980 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" ...
	I0429 20:07:06.757041   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:05.398458   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:05.899296   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.399205   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:06.899079   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.399308   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:07.898749   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:08.399182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:08.399271   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:08.448015   66615 cri.go:89] found id: ""
	I0429 20:07:08.448041   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.448049   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:08.448055   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:08.448103   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:08.491239   66615 cri.go:89] found id: ""
	I0429 20:07:08.491265   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.491274   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:08.491280   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:08.491330   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:08.541203   66615 cri.go:89] found id: ""
	I0429 20:07:08.541226   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.541234   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:08.541239   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:08.541300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:08.584370   66615 cri.go:89] found id: ""
	I0429 20:07:08.584393   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.584401   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:08.584407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:08.584469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:08.625126   66615 cri.go:89] found id: ""
	I0429 20:07:08.625158   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.625169   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:08.625182   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:08.625246   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:08.666987   66615 cri.go:89] found id: ""
	I0429 20:07:08.667018   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.667032   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:08.667039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:08.667105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:08.712363   66615 cri.go:89] found id: ""
	I0429 20:07:08.712394   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.712405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:08.712413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:08.712471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:08.762122   66615 cri.go:89] found id: ""
	I0429 20:07:08.762151   66615 logs.go:276] 0 containers: []
	W0429 20:07:08.762170   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:08.762180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:08.762196   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:08.808218   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:08.808246   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:08.867278   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:08.867317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:08.884230   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:08.884266   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:09.018183   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:09.018208   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:09.018224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:07.555443   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:09.557653   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:08.551823   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.051232   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:09.257687   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.758829   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:11.587112   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:11.603711   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:11.603781   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:11.651087   66615 cri.go:89] found id: ""
	I0429 20:07:11.651115   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.651123   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:11.651128   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:11.651192   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:11.691888   66615 cri.go:89] found id: ""
	I0429 20:07:11.691914   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.691921   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:11.691928   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:11.691976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:11.733411   66615 cri.go:89] found id: ""
	I0429 20:07:11.733441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.733452   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:11.733460   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:11.733517   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:11.774620   66615 cri.go:89] found id: ""
	I0429 20:07:11.774648   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.774659   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:11.774666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:11.774729   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:11.821410   66615 cri.go:89] found id: ""
	I0429 20:07:11.821441   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.821449   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:11.821455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:11.821502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:11.864699   66615 cri.go:89] found id: ""
	I0429 20:07:11.864730   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.864741   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:11.864749   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:11.864809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:11.904637   66615 cri.go:89] found id: ""
	I0429 20:07:11.904678   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.904687   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:11.904693   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:11.904742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:11.970914   66615 cri.go:89] found id: ""
	I0429 20:07:11.970945   66615 logs.go:276] 0 containers: []
	W0429 20:07:11.970957   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:11.970968   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:11.970984   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:12.024185   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:12.024226   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:12.040319   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:12.040349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:12.137888   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:12.137915   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:12.137941   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:12.210256   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:12.210290   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:14.758756   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:14.775321   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:14.775386   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:14.812637   66615 cri.go:89] found id: ""
	I0429 20:07:14.812662   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.812672   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:14.812679   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:14.812735   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:14.851503   66615 cri.go:89] found id: ""
	I0429 20:07:14.851536   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.851547   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:14.851554   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:14.851613   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:14.885708   66615 cri.go:89] found id: ""
	I0429 20:07:14.885739   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.885749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:14.885756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:14.885817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:14.926133   66615 cri.go:89] found id: ""
	I0429 20:07:14.926162   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.926173   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:14.926181   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:14.926240   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:12.056093   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.056500   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:13.549924   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:15.550544   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.257394   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:16.756833   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:14.967553   66615 cri.go:89] found id: ""
	I0429 20:07:14.967582   66615 logs.go:276] 0 containers: []
	W0429 20:07:14.967593   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:14.967601   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:14.967659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:15.006174   66615 cri.go:89] found id: ""
	I0429 20:07:15.006199   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.006207   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:15.006218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:15.006293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:15.046916   66615 cri.go:89] found id: ""
	I0429 20:07:15.046940   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.046947   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:15.046953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:15.047009   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:15.089229   66615 cri.go:89] found id: ""
	I0429 20:07:15.089256   66615 logs.go:276] 0 containers: []
	W0429 20:07:15.089266   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:15.089278   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:15.089298   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:15.143518   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:15.143561   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:15.162742   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:15.162769   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:15.242850   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:15.242872   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:15.242884   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:15.315783   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:15.315825   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:17.863336   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:17.877802   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:17.877869   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:17.935714   66615 cri.go:89] found id: ""
	I0429 20:07:17.935738   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.935746   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:17.935754   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:17.935810   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:17.988496   66615 cri.go:89] found id: ""
	I0429 20:07:17.988529   66615 logs.go:276] 0 containers: []
	W0429 20:07:17.988540   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:17.988547   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:17.988610   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:18.030695   66615 cri.go:89] found id: ""
	I0429 20:07:18.030726   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.030737   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:18.030745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:18.030822   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:18.077452   66615 cri.go:89] found id: ""
	I0429 20:07:18.077481   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.077491   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:18.077498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:18.077561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:18.120102   66615 cri.go:89] found id: ""
	I0429 20:07:18.120127   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.120136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:18.120141   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:18.120200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:18.163440   66615 cri.go:89] found id: ""
	I0429 20:07:18.163469   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.163480   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:18.163487   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:18.163549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:18.202650   66615 cri.go:89] found id: ""
	I0429 20:07:18.202680   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.202693   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:18.202699   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:18.202760   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:18.244378   66615 cri.go:89] found id: ""
	I0429 20:07:18.244408   66615 logs.go:276] 0 containers: []
	W0429 20:07:18.244418   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:18.244429   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:18.244446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:18.289246   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:18.289279   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:18.343382   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:18.343425   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:18.359070   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:18.359103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:18.440316   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:18.440337   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:18.440351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:16.555711   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.555851   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.051297   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:20.551594   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:18.756941   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:20.756974   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:22.757155   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:21.019552   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:21.036407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:21.036523   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:21.083148   66615 cri.go:89] found id: ""
	I0429 20:07:21.083171   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.083179   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:21.083184   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:21.083231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:21.129382   66615 cri.go:89] found id: ""
	I0429 20:07:21.129415   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.129426   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:21.129434   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:21.129502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:21.172978   66615 cri.go:89] found id: ""
	I0429 20:07:21.173007   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.173015   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:21.173020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:21.173068   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:21.218124   66615 cri.go:89] found id: ""
	I0429 20:07:21.218159   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.218171   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:21.218178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:21.218243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:21.260603   66615 cri.go:89] found id: ""
	I0429 20:07:21.260640   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.260651   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:21.260658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:21.260723   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:21.302351   66615 cri.go:89] found id: ""
	I0429 20:07:21.302386   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.302398   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:21.302407   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:21.302498   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:21.347003   66615 cri.go:89] found id: ""
	I0429 20:07:21.347028   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.347037   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:21.347043   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:21.347098   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:21.388202   66615 cri.go:89] found id: ""
	I0429 20:07:21.388236   66615 logs.go:276] 0 containers: []
	W0429 20:07:21.388245   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:21.388257   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:21.388272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:21.442706   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:21.442744   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:21.457453   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:21.457489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:21.539669   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:21.539695   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:21.539707   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:21.625210   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:21.625247   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.173256   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:24.189920   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:24.189990   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:24.236730   66615 cri.go:89] found id: ""
	I0429 20:07:24.236761   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.236772   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:24.236779   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:24.236843   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:24.279031   66615 cri.go:89] found id: ""
	I0429 20:07:24.279055   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.279062   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:24.279067   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:24.279112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:24.321622   66615 cri.go:89] found id: ""
	I0429 20:07:24.321647   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.321657   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:24.321665   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:24.321726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:24.360884   66615 cri.go:89] found id: ""
	I0429 20:07:24.360911   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.360919   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:24.360924   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:24.360983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:24.414439   66615 cri.go:89] found id: ""
	I0429 20:07:24.414463   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.414472   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:24.414477   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:24.414559   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:24.456994   66615 cri.go:89] found id: ""
	I0429 20:07:24.457023   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.457033   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:24.457041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:24.457107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:24.497991   66615 cri.go:89] found id: ""
	I0429 20:07:24.498026   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.498036   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:24.498044   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:24.498137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:24.539375   66615 cri.go:89] found id: ""
	I0429 20:07:24.539415   66615 logs.go:276] 0 containers: []
	W0429 20:07:24.539426   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:24.539438   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:24.539453   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:24.661778   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:24.661804   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:24.661820   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:24.748180   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:24.748215   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:24.795963   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:24.795999   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:24.851485   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:24.851524   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:20.556543   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:22.556775   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:24.559759   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:23.052715   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:25.550857   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.551209   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:25.256195   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.258199   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:27.367869   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:27.385633   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:27.385716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:27.423181   66615 cri.go:89] found id: ""
	I0429 20:07:27.423210   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.423222   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:27.423233   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:27.423293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:27.467385   66615 cri.go:89] found id: ""
	I0429 20:07:27.467419   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.467432   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:27.467439   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:27.467503   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:27.506171   66615 cri.go:89] found id: ""
	I0429 20:07:27.506204   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.506216   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:27.506223   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:27.506272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:27.545043   66615 cri.go:89] found id: ""
	I0429 20:07:27.545066   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.545074   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:27.545080   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:27.545136   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:27.592279   66615 cri.go:89] found id: ""
	I0429 20:07:27.592306   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.592314   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:27.592320   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:27.592379   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:27.628569   66615 cri.go:89] found id: ""
	I0429 20:07:27.628595   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.628604   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:27.628612   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:27.628659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:27.667937   66615 cri.go:89] found id: ""
	I0429 20:07:27.667967   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.667978   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:27.667985   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:27.668047   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:27.708813   66615 cri.go:89] found id: ""
	I0429 20:07:27.708844   66615 logs.go:276] 0 containers: []
	W0429 20:07:27.708853   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:27.708861   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:27.708876   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:27.789589   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:27.789625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:27.837147   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:27.837180   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:27.891928   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:27.891956   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:27.906162   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:27.906188   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:27.983738   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:27.057372   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:29.555872   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:30.049373   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:32.052279   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:29.758675   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:32.257486   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:30.484404   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:30.503968   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:30.504041   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:30.553070   66615 cri.go:89] found id: ""
	I0429 20:07:30.553099   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.553111   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:30.553118   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:30.553180   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:30.609226   66615 cri.go:89] found id: ""
	I0429 20:07:30.609253   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.609262   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:30.609267   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:30.609324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:30.658359   66615 cri.go:89] found id: ""
	I0429 20:07:30.658384   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.658395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:30.658401   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:30.658459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:30.710024   66615 cri.go:89] found id: ""
	I0429 20:07:30.710048   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.710058   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:30.710114   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:30.710173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:30.752361   66615 cri.go:89] found id: ""
	I0429 20:07:30.752388   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.752398   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:30.752405   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:30.752469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:30.793311   66615 cri.go:89] found id: ""
	I0429 20:07:30.793333   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.793341   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:30.793347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:30.793394   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:30.832371   66615 cri.go:89] found id: ""
	I0429 20:07:30.832400   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.832411   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:30.832417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:30.832469   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:30.871183   66615 cri.go:89] found id: ""
	I0429 20:07:30.871215   66615 logs.go:276] 0 containers: []
	W0429 20:07:30.871226   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:30.871237   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:30.871253   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:30.929909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:30.929947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:30.944454   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:30.944482   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:31.022060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:31.022100   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:31.022116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:31.104142   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:31.104185   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:33.651167   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:33.667888   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:33.667948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:33.708455   66615 cri.go:89] found id: ""
	I0429 20:07:33.708484   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.708495   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:33.708502   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:33.708561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:33.747578   66615 cri.go:89] found id: ""
	I0429 20:07:33.747602   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.747611   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:33.747616   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:33.747661   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:33.796005   66615 cri.go:89] found id: ""
	I0429 20:07:33.796036   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.796056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:33.796064   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:33.796128   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:33.836238   66615 cri.go:89] found id: ""
	I0429 20:07:33.836263   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.836271   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:33.836276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:33.836324   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:33.877010   66615 cri.go:89] found id: ""
	I0429 20:07:33.877043   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.877056   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:33.877065   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:33.877137   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:33.919690   66615 cri.go:89] found id: ""
	I0429 20:07:33.919714   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.919722   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:33.919727   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:33.919797   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:33.959857   66615 cri.go:89] found id: ""
	I0429 20:07:33.959889   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.959900   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:33.959907   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:33.959989   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:33.996349   66615 cri.go:89] found id: ""
	I0429 20:07:33.996376   66615 logs.go:276] 0 containers: []
	W0429 20:07:33.996386   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:33.996396   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:33.996433   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:34.010773   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:34.010808   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:34.091581   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:34.091599   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:34.091611   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:34.173266   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:34.173299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:34.221447   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:34.221479   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:32.055352   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.056364   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.550100   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.550663   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:34.756264   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.756583   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:36.776486   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:36.791630   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:36.791764   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:36.837475   66615 cri.go:89] found id: ""
	I0429 20:07:36.837503   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.837513   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:36.837521   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:36.837607   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:36.879902   66615 cri.go:89] found id: ""
	I0429 20:07:36.879936   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.879947   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:36.879954   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:36.880021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:36.918566   66615 cri.go:89] found id: ""
	I0429 20:07:36.918594   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.918608   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:36.918613   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:36.918659   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:36.958876   66615 cri.go:89] found id: ""
	I0429 20:07:36.958937   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.958948   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:36.958959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:36.959008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:36.998790   66615 cri.go:89] found id: ""
	I0429 20:07:36.998820   66615 logs.go:276] 0 containers: []
	W0429 20:07:36.998845   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:36.998864   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:36.998932   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:37.036933   66615 cri.go:89] found id: ""
	I0429 20:07:37.036962   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.036972   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:37.036979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:37.037024   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:37.076560   66615 cri.go:89] found id: ""
	I0429 20:07:37.076597   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.076609   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:37.076616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:37.076688   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:37.118324   66615 cri.go:89] found id: ""
	I0429 20:07:37.118351   66615 logs.go:276] 0 containers: []
	W0429 20:07:37.118360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:37.118368   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:37.118380   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:37.194671   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:37.194714   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:37.236269   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:37.236300   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:37.297006   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:37.297061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:37.312696   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:37.312723   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:37.387132   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:39.888111   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:39.903157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:39.903236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:39.945913   66615 cri.go:89] found id: ""
	I0429 20:07:39.945945   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.945956   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:39.945980   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:39.946076   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:36.056553   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:38.057230   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:39.050274   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:41.053502   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:38.756717   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:40.762297   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:39.986494   66615 cri.go:89] found id: ""
	I0429 20:07:39.986521   66615 logs.go:276] 0 containers: []
	W0429 20:07:39.986530   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:39.986538   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:39.986598   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:40.031481   66615 cri.go:89] found id: ""
	I0429 20:07:40.031520   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.031531   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:40.031539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:40.031604   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:40.076792   66615 cri.go:89] found id: ""
	I0429 20:07:40.076816   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.076824   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:40.076830   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:40.076877   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:40.121020   66615 cri.go:89] found id: ""
	I0429 20:07:40.121050   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.121061   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:40.121068   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:40.121134   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:40.173189   66615 cri.go:89] found id: ""
	I0429 20:07:40.173221   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.173233   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:40.173241   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:40.173303   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:40.220190   66615 cri.go:89] found id: ""
	I0429 20:07:40.220212   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.220223   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:40.220229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:40.220293   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:40.262552   66615 cri.go:89] found id: ""
	I0429 20:07:40.262579   66615 logs.go:276] 0 containers: []
	W0429 20:07:40.262588   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:40.262600   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:40.262616   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:40.322249   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:40.322289   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:40.338703   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:40.338734   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:40.431311   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:40.431333   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:40.431345   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:40.518410   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:40.518446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:43.062556   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:43.077757   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:43.077844   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:43.129247   66615 cri.go:89] found id: ""
	I0429 20:07:43.129277   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.129289   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:43.129296   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:43.129364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:43.173474   66615 cri.go:89] found id: ""
	I0429 20:07:43.173501   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.173509   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:43.173514   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:43.173566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:43.218788   66615 cri.go:89] found id: ""
	I0429 20:07:43.218812   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.218820   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:43.218825   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:43.218873   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:43.259269   66615 cri.go:89] found id: ""
	I0429 20:07:43.259289   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.259297   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:43.259302   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:43.259362   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:43.301152   66615 cri.go:89] found id: ""
	I0429 20:07:43.301180   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.301189   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:43.301195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:43.301244   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:43.338183   66615 cri.go:89] found id: ""
	I0429 20:07:43.338211   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.338222   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:43.338229   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:43.338276   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:43.376919   66615 cri.go:89] found id: ""
	I0429 20:07:43.376946   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.376958   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:43.376966   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:43.377032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:43.417421   66615 cri.go:89] found id: ""
	I0429 20:07:43.417450   66615 logs.go:276] 0 containers: []
	W0429 20:07:43.417457   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:43.417465   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:43.417478   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:43.470009   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:43.470040   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:43.486059   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:43.486109   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:43.561688   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:43.561709   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:43.561725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:43.649713   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:43.649750   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:40.555780   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.056758   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.552176   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:46.049393   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:43.256870   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:45.258520   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:47.757738   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:46.194996   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:46.210261   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:46.210342   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:46.249208   66615 cri.go:89] found id: ""
	I0429 20:07:46.249240   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.249253   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:46.249260   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:46.249336   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:46.287285   66615 cri.go:89] found id: ""
	I0429 20:07:46.287315   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.287328   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:46.287335   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:46.287397   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:46.327944   66615 cri.go:89] found id: ""
	I0429 20:07:46.327976   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.327988   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:46.327996   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:46.328061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:46.373875   66615 cri.go:89] found id: ""
	I0429 20:07:46.373899   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.373908   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:46.373914   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:46.373967   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:46.413748   66615 cri.go:89] found id: ""
	I0429 20:07:46.413774   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.413783   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:46.413789   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:46.413853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:46.459380   66615 cri.go:89] found id: ""
	I0429 20:07:46.459412   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.459424   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:46.459432   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:46.459496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:46.499833   66615 cri.go:89] found id: ""
	I0429 20:07:46.499861   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.499870   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:46.499876   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:46.499939   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:46.541025   66615 cri.go:89] found id: ""
	I0429 20:07:46.541055   66615 logs.go:276] 0 containers: []
	W0429 20:07:46.541068   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:46.541080   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:46.541096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:46.601187   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:46.601224   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:46.617399   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:46.617426   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:46.697076   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:46.697113   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:46.697129   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:46.783265   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:46.783303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.335795   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:49.350030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:49.350116   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:49.390278   66615 cri.go:89] found id: ""
	I0429 20:07:49.390315   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.390326   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:49.390333   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:49.390388   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:49.431145   66615 cri.go:89] found id: ""
	I0429 20:07:49.431175   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.431186   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:49.431193   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:49.431252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:49.473965   66615 cri.go:89] found id: ""
	I0429 20:07:49.473997   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.474014   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:49.474022   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:49.474105   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:49.515372   66615 cri.go:89] found id: ""
	I0429 20:07:49.515407   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.515419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:49.515427   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:49.515487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:49.552541   66615 cri.go:89] found id: ""
	I0429 20:07:49.552567   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.552576   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:49.552582   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:49.552650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:49.599628   66615 cri.go:89] found id: ""
	I0429 20:07:49.599660   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.599672   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:49.599680   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:49.599745   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:49.642705   66615 cri.go:89] found id: ""
	I0429 20:07:49.642741   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.642752   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:49.642759   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:49.642827   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:49.679864   66615 cri.go:89] found id: ""
	I0429 20:07:49.679888   66615 logs.go:276] 0 containers: []
	W0429 20:07:49.679896   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:49.679905   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:49.679919   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:49.765967   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:49.765986   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:49.766010   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:49.852739   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:49.852779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:49.905586   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:49.905613   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:45.559781   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:48.059952   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:48.049788   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:50.548836   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.551059   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:50.256898   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.757213   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:49.959443   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:49.959474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:52.476677   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:52.491378   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:52.491458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:52.535801   66615 cri.go:89] found id: ""
	I0429 20:07:52.535827   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.535835   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:52.535841   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:52.535901   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:52.582895   66615 cri.go:89] found id: ""
	I0429 20:07:52.582932   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.582944   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:52.582952   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:52.583022   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:52.627070   66615 cri.go:89] found id: ""
	I0429 20:07:52.627096   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.627113   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:52.627120   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:52.627181   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:52.673312   66615 cri.go:89] found id: ""
	I0429 20:07:52.673339   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.673348   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:52.673353   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:52.673399   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:52.713099   66615 cri.go:89] found id: ""
	I0429 20:07:52.713124   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.713131   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:52.713139   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:52.713205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:52.761982   66615 cri.go:89] found id: ""
	I0429 20:07:52.762007   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.762017   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:52.762024   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:52.762108   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:52.801019   66615 cri.go:89] found id: ""
	I0429 20:07:52.801048   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.801059   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:52.801067   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:52.801141   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:52.842544   66615 cri.go:89] found id: ""
	I0429 20:07:52.842578   66615 logs.go:276] 0 containers: []
	W0429 20:07:52.842602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:52.842613   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:52.842630   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:52.896409   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:52.896442   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:52.912625   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:52.912650   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:52.992231   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:52.992260   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:52.992276   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:53.077473   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:53.077507   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:50.555818   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:52.556860   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:54.557161   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:54.554094   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:57.049699   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:55.257406   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:57.257840   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:55.625557   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:55.640211   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:55.640284   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:55.683215   66615 cri.go:89] found id: ""
	I0429 20:07:55.683250   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.683259   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:55.683275   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:55.683341   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:55.730820   66615 cri.go:89] found id: ""
	I0429 20:07:55.730851   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.730862   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:55.730869   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:55.730928   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:55.771784   66615 cri.go:89] found id: ""
	I0429 20:07:55.771808   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.771816   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:55.771821   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:55.771866   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:55.814988   66615 cri.go:89] found id: ""
	I0429 20:07:55.815021   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.815034   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:55.815042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:55.815114   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:55.859293   66615 cri.go:89] found id: ""
	I0429 20:07:55.859327   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.859340   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:55.859349   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:55.859416   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:55.901802   66615 cri.go:89] found id: ""
	I0429 20:07:55.901833   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.901844   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:55.901852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:55.901921   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:55.943863   66615 cri.go:89] found id: ""
	I0429 20:07:55.943895   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.943905   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:55.943913   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:55.943977   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:55.986256   66615 cri.go:89] found id: ""
	I0429 20:07:55.986284   66615 logs.go:276] 0 containers: []
	W0429 20:07:55.986296   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:55.986314   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:55.986332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:56.036710   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:56.036742   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:56.099909   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:56.099945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:56.117630   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:56.117660   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:56.197396   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:56.197421   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:56.197436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:58.779065   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:07:58.794086   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:07:58.794168   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:07:58.844035   66615 cri.go:89] found id: ""
	I0429 20:07:58.844062   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.844070   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:07:58.844076   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:07:58.844133   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:07:58.887859   66615 cri.go:89] found id: ""
	I0429 20:07:58.887889   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.887900   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:07:58.887906   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:07:58.887991   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:07:58.929039   66615 cri.go:89] found id: ""
	I0429 20:07:58.929072   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.929083   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:07:58.929092   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:07:58.929152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:07:58.965930   66615 cri.go:89] found id: ""
	I0429 20:07:58.965975   66615 logs.go:276] 0 containers: []
	W0429 20:07:58.965983   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:07:58.965989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:07:58.966061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:07:59.005583   66615 cri.go:89] found id: ""
	I0429 20:07:59.005616   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.005628   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:07:59.005638   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:07:59.005697   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:07:59.047964   66615 cri.go:89] found id: ""
	I0429 20:07:59.047994   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.048007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:07:59.048014   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:07:59.048077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:07:59.091851   66615 cri.go:89] found id: ""
	I0429 20:07:59.091891   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.091904   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:07:59.091909   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:07:59.091978   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:07:59.134843   66615 cri.go:89] found id: ""
	I0429 20:07:59.134874   66615 logs.go:276] 0 containers: []
	W0429 20:07:59.134881   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:07:59.134890   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:07:59.134907   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:07:59.219048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:07:59.219084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:07:59.267404   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:07:59.267436   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:07:59.322264   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:07:59.322303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:07:59.339196   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:07:59.339235   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:07:59.441904   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:07:56.558660   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.057214   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.054473   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.550825   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:07:59.756683   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.759031   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:01.942998   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:01.957442   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:01.957502   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:02.002240   66615 cri.go:89] found id: ""
	I0429 20:08:02.002271   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.002283   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:02.002291   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:02.002353   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:02.051506   66615 cri.go:89] found id: ""
	I0429 20:08:02.051535   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.051546   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:02.051552   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:02.051611   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:02.093194   66615 cri.go:89] found id: ""
	I0429 20:08:02.093234   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.093247   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:02.093254   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:02.093317   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:02.134988   66615 cri.go:89] found id: ""
	I0429 20:08:02.135016   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.135027   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:02.135034   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:02.135099   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:02.182954   66615 cri.go:89] found id: ""
	I0429 20:08:02.182982   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.182993   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:02.183000   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:02.183063   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:02.227778   66615 cri.go:89] found id: ""
	I0429 20:08:02.227807   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.227817   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:02.227826   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:02.227888   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:02.265593   66615 cri.go:89] found id: ""
	I0429 20:08:02.265624   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.265634   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:02.265641   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:02.265701   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:02.306520   66615 cri.go:89] found id: ""
	I0429 20:08:02.306550   66615 logs.go:276] 0 containers: []
	W0429 20:08:02.306558   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:02.306566   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:02.306578   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:02.323806   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:02.323844   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:02.407110   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:02.407140   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:02.407153   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:02.493755   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:02.493791   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:02.538610   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:02.538640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:01.556084   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:03.556487   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:03.551788   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:05.553047   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:04.257831   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:06.756438   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:05.096630   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:05.111112   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:05.111173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:05.151237   66615 cri.go:89] found id: ""
	I0429 20:08:05.151268   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.151279   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:05.151286   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:05.151370   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:05.205344   66615 cri.go:89] found id: ""
	I0429 20:08:05.205379   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.205389   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:05.205396   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:05.205478   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:05.244394   66615 cri.go:89] found id: ""
	I0429 20:08:05.244426   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.244438   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:05.244445   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:05.244504   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:05.285320   66615 cri.go:89] found id: ""
	I0429 20:08:05.285343   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.285350   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:05.285356   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:05.285404   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:05.327618   66615 cri.go:89] found id: ""
	I0429 20:08:05.327645   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.327657   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:05.327664   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:05.327742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:05.369152   66615 cri.go:89] found id: ""
	I0429 20:08:05.369178   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.369194   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:05.369208   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:05.369277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:05.407206   66615 cri.go:89] found id: ""
	I0429 20:08:05.407234   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.407243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:05.407248   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:05.407299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:05.447404   66615 cri.go:89] found id: ""
	I0429 20:08:05.447438   66615 logs.go:276] 0 containers: []
	W0429 20:08:05.447449   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:05.447459   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:05.447475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:05.529660   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:05.529700   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:05.582510   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:05.582565   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:05.639300   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:05.639351   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:05.656825   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:05.656860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:05.730863   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.231635   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:08.247722   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:08.247811   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:08.298354   66615 cri.go:89] found id: ""
	I0429 20:08:08.298382   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.298395   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:08.298401   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:08.298459   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:08.339497   66615 cri.go:89] found id: ""
	I0429 20:08:08.339536   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.339549   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:08.339556   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:08.339609   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:08.379665   66615 cri.go:89] found id: ""
	I0429 20:08:08.379695   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.379705   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:08.379712   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:08.379786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:08.419698   66615 cri.go:89] found id: ""
	I0429 20:08:08.419722   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.419732   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:08.419739   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:08.419798   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:08.463901   66615 cri.go:89] found id: ""
	I0429 20:08:08.463935   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.463946   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:08.463953   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:08.464028   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:08.504568   66615 cri.go:89] found id: ""
	I0429 20:08:08.504603   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.504617   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:08.504626   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:08.504695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:08.545634   66615 cri.go:89] found id: ""
	I0429 20:08:08.545661   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.545671   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:08.545678   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:08.545741   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:08.586936   66615 cri.go:89] found id: ""
	I0429 20:08:08.586965   66615 logs.go:276] 0 containers: []
	W0429 20:08:08.586976   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:08.586987   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:08.587003   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:08.641755   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:08.641794   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:08.659798   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:08.659845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:08.744265   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:08.744288   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:08.744303   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:08.823813   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:08.823860   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:05.557172   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:07.558538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:10.057841   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:08.049902   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:10.050576   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:12.051331   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:08.757300   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:11.257697   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:11.375600   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:11.396286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:11.396351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:11.442737   66615 cri.go:89] found id: ""
	I0429 20:08:11.442781   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.442789   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:11.442797   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:11.442865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:11.484131   66615 cri.go:89] found id: ""
	I0429 20:08:11.484158   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.484167   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:11.484172   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:11.484231   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:11.526647   66615 cri.go:89] found id: ""
	I0429 20:08:11.526684   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.526695   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:11.526705   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:11.526777   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:11.572001   66615 cri.go:89] found id: ""
	I0429 20:08:11.572028   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.572036   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:11.572042   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:11.572100   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:11.618980   66615 cri.go:89] found id: ""
	I0429 20:08:11.619003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.619011   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:11.619016   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:11.619077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:11.667079   66615 cri.go:89] found id: ""
	I0429 20:08:11.667107   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.667115   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:11.667123   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:11.667198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:11.707967   66615 cri.go:89] found id: ""
	I0429 20:08:11.708003   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.708013   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:11.708020   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:11.708073   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:11.753024   66615 cri.go:89] found id: ""
	I0429 20:08:11.753053   66615 logs.go:276] 0 containers: []
	W0429 20:08:11.753062   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:11.753070   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:11.753081   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:11.820171   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:11.820210   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:11.852234   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:11.852263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:11.971060   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:11.971085   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:11.971097   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:12.049797   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:12.049845   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:14.601181   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:14.621413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:14.621496   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:14.677453   66615 cri.go:89] found id: ""
	I0429 20:08:14.677486   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.677498   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:14.677504   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:14.677562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:14.720517   66615 cri.go:89] found id: ""
	I0429 20:08:14.720548   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.720560   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:14.720571   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:14.720636   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:14.770186   66615 cri.go:89] found id: ""
	I0429 20:08:14.770211   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.770219   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:14.770225   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:14.770301   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:14.815286   66615 cri.go:89] found id: ""
	I0429 20:08:14.815310   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.815320   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:14.815327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:14.815389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:14.862625   66615 cri.go:89] found id: ""
	I0429 20:08:14.862651   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.862662   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:14.862669   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:14.862726   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:14.910517   66615 cri.go:89] found id: ""
	I0429 20:08:14.910554   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.910565   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:14.910572   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:14.910634   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:14.951085   66615 cri.go:89] found id: ""
	I0429 20:08:14.951110   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.951119   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:14.951124   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:14.951173   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:12.558191   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:15.056987   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:14.051423   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:16.051632   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:13.757001   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:16.257425   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:14.991414   66615 cri.go:89] found id: ""
	I0429 20:08:14.991443   66615 logs.go:276] 0 containers: []
	W0429 20:08:14.991455   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:14.991464   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:14.991476   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:15.047551   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:15.047583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:15.063667   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:15.063692   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:15.141744   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:15.141820   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:15.141841   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:15.225676   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:15.225722   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:17.774459   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:17.793137   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:17.793210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:17.856725   66615 cri.go:89] found id: ""
	I0429 20:08:17.856756   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.856767   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:17.856774   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:17.856835   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:17.916510   66615 cri.go:89] found id: ""
	I0429 20:08:17.916542   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.916554   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:17.916561   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:17.916646   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:17.970835   66615 cri.go:89] found id: ""
	I0429 20:08:17.970867   66615 logs.go:276] 0 containers: []
	W0429 20:08:17.970877   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:17.970884   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:17.970948   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:18.013324   66615 cri.go:89] found id: ""
	I0429 20:08:18.013353   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.013366   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:18.013384   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:18.013458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:18.062930   66615 cri.go:89] found id: ""
	I0429 20:08:18.062957   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.062968   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:18.062974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:18.063040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:18.111792   66615 cri.go:89] found id: ""
	I0429 20:08:18.111820   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.111829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:18.111834   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:18.111911   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:18.160096   66615 cri.go:89] found id: ""
	I0429 20:08:18.160121   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.160129   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:18.160135   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:18.160198   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:18.204012   66615 cri.go:89] found id: ""
	I0429 20:08:18.204044   66615 logs.go:276] 0 containers: []
	W0429 20:08:18.204052   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:18.204062   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:18.204074   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:18.284288   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:18.284337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:18.340746   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:18.340779   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:18.397612   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:18.397652   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:18.413425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:18.413455   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:18.493598   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:17.058215   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:19.556308   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:18.551175   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:20.551292   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:22.551637   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:18.757370   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:21.259192   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:20.994339   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:21.010199   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:21.010289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:21.052190   66615 cri.go:89] found id: ""
	I0429 20:08:21.052219   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.052230   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:21.052237   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:21.052300   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:21.090838   66615 cri.go:89] found id: ""
	I0429 20:08:21.090870   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.090882   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:21.090889   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:21.090953   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:21.137997   66615 cri.go:89] found id: ""
	I0429 20:08:21.138044   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.138056   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:21.138082   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:21.138171   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:21.176278   66615 cri.go:89] found id: ""
	I0429 20:08:21.176311   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.176323   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:21.176331   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:21.176390   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:21.213925   66615 cri.go:89] found id: ""
	I0429 20:08:21.213955   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.213966   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:21.213973   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:21.214039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:21.253815   66615 cri.go:89] found id: ""
	I0429 20:08:21.253842   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.253850   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:21.253857   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:21.253905   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:21.296521   66615 cri.go:89] found id: ""
	I0429 20:08:21.296553   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.296565   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:21.296573   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:21.296633   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:21.337114   66615 cri.go:89] found id: ""
	I0429 20:08:21.337143   66615 logs.go:276] 0 containers: []
	W0429 20:08:21.337150   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:21.337158   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:21.337177   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:21.384860   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:21.384901   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:21.443837   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:21.443899   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:21.460084   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:21.460116   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:21.541230   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:21.541262   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:21.541278   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.132057   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:24.148381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:24.148458   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:24.192469   66615 cri.go:89] found id: ""
	I0429 20:08:24.192499   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.192510   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:24.192516   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:24.192568   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:24.232150   66615 cri.go:89] found id: ""
	I0429 20:08:24.232177   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.232188   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:24.232195   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:24.232260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:24.272679   66615 cri.go:89] found id: ""
	I0429 20:08:24.272705   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.272714   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:24.272719   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:24.272772   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:24.317114   66615 cri.go:89] found id: ""
	I0429 20:08:24.317137   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.317145   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:24.317151   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:24.317200   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:24.362251   66615 cri.go:89] found id: ""
	I0429 20:08:24.362279   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.362287   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:24.362294   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:24.362346   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:24.405696   66615 cri.go:89] found id: ""
	I0429 20:08:24.405721   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.405729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:24.405734   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:24.405828   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:24.446837   66615 cri.go:89] found id: ""
	I0429 20:08:24.446864   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.446871   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:24.446878   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:24.446929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:24.493416   66615 cri.go:89] found id: ""
	I0429 20:08:24.493445   66615 logs.go:276] 0 containers: []
	W0429 20:08:24.493454   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:24.493462   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:24.493475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:24.555657   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:24.555693   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:24.572297   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:24.572328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:24.658463   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:24.658487   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:24.658499   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:24.752064   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:24.752103   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:21.557948   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:24.056339   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:25.050530   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:27.554744   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:23.758156   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:26.261403   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:27.303812   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:27.319304   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:27.319373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:27.360473   66615 cri.go:89] found id: ""
	I0429 20:08:27.360509   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.360521   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:27.360529   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:27.360595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:27.404619   66615 cri.go:89] found id: ""
	I0429 20:08:27.404651   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.404668   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:27.404675   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:27.404742   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:27.447464   66615 cri.go:89] found id: ""
	I0429 20:08:27.447490   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.447498   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:27.447503   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:27.447556   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:27.489197   66615 cri.go:89] found id: ""
	I0429 20:08:27.489235   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.489246   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:27.489253   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:27.489323   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:27.534354   66615 cri.go:89] found id: ""
	I0429 20:08:27.534387   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.534397   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:27.534404   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:27.534470   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:27.580721   66615 cri.go:89] found id: ""
	I0429 20:08:27.580751   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.580762   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:27.580769   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:27.580841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:27.620000   66615 cri.go:89] found id: ""
	I0429 20:08:27.620033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.620041   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:27.620046   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:27.620096   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:27.659000   66615 cri.go:89] found id: ""
	I0429 20:08:27.659033   66615 logs.go:276] 0 containers: []
	W0429 20:08:27.659041   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:27.659050   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:27.659062   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:27.739202   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:27.739241   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:27.784761   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:27.784807   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:27.842707   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:27.842748   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:27.859471   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:27.859498   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:27.942686   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:26.058098   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:28.059648   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.056692   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:32.550893   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:28.757412   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.759070   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:30.443410   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:30.460332   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:30.460417   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:30.497715   66615 cri.go:89] found id: ""
	I0429 20:08:30.497752   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.497764   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:30.497772   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:30.497841   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:30.539376   66615 cri.go:89] found id: ""
	I0429 20:08:30.539409   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.539419   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:30.539426   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:30.539492   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:30.587567   66615 cri.go:89] found id: ""
	I0429 20:08:30.587596   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.587606   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:30.587616   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:30.587679   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:30.626198   66615 cri.go:89] found id: ""
	I0429 20:08:30.626228   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.626238   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:30.626246   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:30.626313   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:30.665798   66615 cri.go:89] found id: ""
	I0429 20:08:30.665829   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.665837   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:30.665843   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:30.665909   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:30.708627   66615 cri.go:89] found id: ""
	I0429 20:08:30.708659   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.708671   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:30.708679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:30.708762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:30.754190   66615 cri.go:89] found id: ""
	I0429 20:08:30.754220   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.754230   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:30.754236   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:30.754295   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:30.797383   66615 cri.go:89] found id: ""
	I0429 20:08:30.797410   66615 logs.go:276] 0 containers: []
	W0429 20:08:30.797421   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:30.797432   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:30.797447   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:30.843485   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:30.843512   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:30.900081   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:30.900118   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:30.916095   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:30.916125   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:30.995509   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:30.995529   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:30.995541   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:33.584596   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:33.600969   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:33.601058   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:33.643935   66615 cri.go:89] found id: ""
	I0429 20:08:33.643967   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.643979   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:33.643986   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:33.644049   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:33.681047   66615 cri.go:89] found id: ""
	I0429 20:08:33.681077   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.681085   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:33.681091   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:33.681160   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:33.726450   66615 cri.go:89] found id: ""
	I0429 20:08:33.726479   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.726490   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:33.726501   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:33.726561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:33.765237   66615 cri.go:89] found id: ""
	I0429 20:08:33.765264   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.765275   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:33.765281   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:33.765339   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:33.808333   66615 cri.go:89] found id: ""
	I0429 20:08:33.808366   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.808376   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:33.808383   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:33.808446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:33.854991   66615 cri.go:89] found id: ""
	I0429 20:08:33.855023   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.855034   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:33.855041   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:33.855126   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:33.895405   66615 cri.go:89] found id: ""
	I0429 20:08:33.895434   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.895446   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:33.895455   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:33.895521   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:33.937265   66615 cri.go:89] found id: ""
	I0429 20:08:33.937289   66615 logs.go:276] 0 containers: []
	W0429 20:08:33.937297   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:33.937306   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:33.937324   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:33.991565   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:33.991594   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:34.006316   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:34.006343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:34.088734   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:34.088762   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:34.088776   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:34.180451   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:34.180489   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:30.557020   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:33.058354   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:35.049638   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:37.051464   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:33.256955   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:35.257122   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:37.257629   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:36.727080   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:36.743038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:36.743124   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:36.785441   66615 cri.go:89] found id: ""
	I0429 20:08:36.785465   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.785475   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:36.785482   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:36.785542   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:36.828787   66615 cri.go:89] found id: ""
	I0429 20:08:36.828819   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.828829   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:36.828836   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:36.828896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:36.867712   66615 cri.go:89] found id: ""
	I0429 20:08:36.867738   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.867749   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:36.867756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:36.867825   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:36.911435   66615 cri.go:89] found id: ""
	I0429 20:08:36.911462   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.911472   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:36.911478   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:36.911560   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:36.953803   66615 cri.go:89] found id: ""
	I0429 20:08:36.953828   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.953836   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:36.953842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:36.953903   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:36.990305   66615 cri.go:89] found id: ""
	I0429 20:08:36.990329   66615 logs.go:276] 0 containers: []
	W0429 20:08:36.990339   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:36.990347   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:36.990434   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:37.029177   66615 cri.go:89] found id: ""
	I0429 20:08:37.029206   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.029225   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:37.029232   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:37.029294   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:37.067583   66615 cri.go:89] found id: ""
	I0429 20:08:37.067605   66615 logs.go:276] 0 containers: []
	W0429 20:08:37.067612   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:37.067619   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:37.067631   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:37.144739   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:37.144776   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:37.144788   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:37.227724   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:37.227762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:37.270383   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:37.270417   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:37.326858   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:37.326890   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:39.843323   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:39.859899   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:39.859961   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:39.903125   66615 cri.go:89] found id: ""
	I0429 20:08:39.903155   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.903164   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:39.903169   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:39.903243   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:39.944271   66615 cri.go:89] found id: ""
	I0429 20:08:39.944300   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.944309   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:39.944314   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:39.944363   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:35.557115   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:38.056175   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.550339   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:42.048622   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.756355   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:42.255528   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:39.989934   66615 cri.go:89] found id: ""
	I0429 20:08:39.989964   66615 logs.go:276] 0 containers: []
	W0429 20:08:39.989972   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:39.989978   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:39.990032   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:40.025936   66615 cri.go:89] found id: ""
	I0429 20:08:40.025965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.025976   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:40.025983   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:40.026044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:40.065943   66615 cri.go:89] found id: ""
	I0429 20:08:40.065965   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.065976   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:40.065984   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:40.066038   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:40.109986   66615 cri.go:89] found id: ""
	I0429 20:08:40.110018   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.110030   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:40.110038   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:40.110115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:40.155610   66615 cri.go:89] found id: ""
	I0429 20:08:40.155716   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.155734   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:40.155745   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:40.155803   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:40.196213   66615 cri.go:89] found id: ""
	I0429 20:08:40.196239   66615 logs.go:276] 0 containers: []
	W0429 20:08:40.196246   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:40.196256   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:40.196272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:40.280330   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:40.280372   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:40.326774   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:40.326810   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:40.379438   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:40.379475   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:40.395332   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:40.395362   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:40.504413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:43.005046   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:43.020464   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:43.020544   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:43.066403   66615 cri.go:89] found id: ""
	I0429 20:08:43.066432   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.066444   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:43.066452   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:43.066548   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:43.109732   66615 cri.go:89] found id: ""
	I0429 20:08:43.109760   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.109771   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:43.109778   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:43.109850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:43.158457   66615 cri.go:89] found id: ""
	I0429 20:08:43.158483   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.158492   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:43.158498   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:43.158561   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:43.207170   66615 cri.go:89] found id: ""
	I0429 20:08:43.207201   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.207213   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:43.207221   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:43.207281   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:43.246746   66615 cri.go:89] found id: ""
	I0429 20:08:43.246783   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.246804   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:43.246811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:43.246875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:43.292786   66615 cri.go:89] found id: ""
	I0429 20:08:43.292813   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.292824   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:43.292831   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:43.292896   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:43.337509   66615 cri.go:89] found id: ""
	I0429 20:08:43.337537   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.337546   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:43.337551   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:43.337601   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:43.378446   66615 cri.go:89] found id: ""
	I0429 20:08:43.378473   66615 logs.go:276] 0 containers: []
	W0429 20:08:43.378481   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:43.378490   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:43.378502   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:43.460438   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:43.460474   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:43.503908   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:43.503945   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:43.561661   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:43.561699   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:43.577924   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:43.577954   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:43.667006   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:40.555875   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:43.057183   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:44.049342   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.049873   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:44.256458   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.256554   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:46.168175   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:46.212494   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:46.212579   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:46.251567   66615 cri.go:89] found id: ""
	I0429 20:08:46.251593   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.251603   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:46.251610   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:46.251673   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:46.291913   66615 cri.go:89] found id: ""
	I0429 20:08:46.291943   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.291955   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:46.291962   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:46.292023   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:46.331801   66615 cri.go:89] found id: ""
	I0429 20:08:46.331827   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.331836   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:46.331842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:46.331899   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:46.375956   66615 cri.go:89] found id: ""
	I0429 20:08:46.375989   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.376001   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:46.376008   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:46.376090   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:46.425572   66615 cri.go:89] found id: ""
	I0429 20:08:46.425599   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.425609   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:46.425618   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:46.425681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:46.468161   66615 cri.go:89] found id: ""
	I0429 20:08:46.468226   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.468249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:46.468263   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:46.468433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:46.512163   66615 cri.go:89] found id: ""
	I0429 20:08:46.512193   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.512205   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:46.512212   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:46.512277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:46.556047   66615 cri.go:89] found id: ""
	I0429 20:08:46.556078   66615 logs.go:276] 0 containers: []
	W0429 20:08:46.556088   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:46.556099   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:46.556111   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:46.609886   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:46.609921   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:46.625848   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:46.625878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:46.699005   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:46.699037   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:46.699053   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:46.783886   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:46.783923   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.331288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:49.344805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:49.344864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:49.381576   66615 cri.go:89] found id: ""
	I0429 20:08:49.381598   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.381605   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:49.381619   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:49.381667   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:49.418276   66615 cri.go:89] found id: ""
	I0429 20:08:49.418316   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.418329   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:49.418336   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:49.418389   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:49.460147   66615 cri.go:89] found id: ""
	I0429 20:08:49.460177   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.460188   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:49.460195   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:49.460253   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:49.500534   66615 cri.go:89] found id: ""
	I0429 20:08:49.500562   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.500569   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:49.500575   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:49.500632   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:49.538481   66615 cri.go:89] found id: ""
	I0429 20:08:49.538521   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.538534   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:49.538541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:49.538603   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:49.580192   66615 cri.go:89] found id: ""
	I0429 20:08:49.580218   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.580228   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:49.580234   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:49.580299   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:49.616400   66615 cri.go:89] found id: ""
	I0429 20:08:49.616427   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.616437   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:49.616444   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:49.616551   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:49.652871   66615 cri.go:89] found id: ""
	I0429 20:08:49.652900   66615 logs.go:276] 0 containers: []
	W0429 20:08:49.652918   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:49.652931   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:49.652947   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:49.728173   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:49.728200   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:49.728212   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:49.813701   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:49.813749   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:49.855685   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:49.855712   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:49.906480   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:49.906514   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:45.559939   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.056008   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.056054   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.052578   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.550638   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.550910   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:48.257460   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:50.259418   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.757365   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:52.422430   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:52.437412   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:52.437488   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:52.476896   66615 cri.go:89] found id: ""
	I0429 20:08:52.476919   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.476927   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:52.476932   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:52.476976   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:52.517266   66615 cri.go:89] found id: ""
	I0429 20:08:52.517298   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.517310   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:52.517318   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:52.517381   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:52.560886   66615 cri.go:89] found id: ""
	I0429 20:08:52.560909   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.560917   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:52.560922   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:52.560969   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:52.601362   66615 cri.go:89] found id: ""
	I0429 20:08:52.601398   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.601419   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:52.601429   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:52.601506   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:52.639544   66615 cri.go:89] found id: ""
	I0429 20:08:52.639580   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.639591   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:52.639599   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:52.639652   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:52.681088   66615 cri.go:89] found id: ""
	I0429 20:08:52.681120   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.681130   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:52.681138   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:52.681204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:52.721777   66615 cri.go:89] found id: ""
	I0429 20:08:52.721802   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.721820   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:52.721828   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:52.721900   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:52.762823   66615 cri.go:89] found id: ""
	I0429 20:08:52.762845   66615 logs.go:276] 0 containers: []
	W0429 20:08:52.762856   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:52.762863   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:52.762875   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:52.819291   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:52.819326   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:52.847120   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:52.847165   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:52.956274   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:52.956301   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:52.956317   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:53.041636   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:53.041676   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:52.056558   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:54.555745   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.051656   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:57.549668   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.257083   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:57.757855   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:55.592636   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:55.607372   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:55.607449   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:55.643959   66615 cri.go:89] found id: ""
	I0429 20:08:55.643991   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.644000   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:55.644005   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:55.644061   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:55.682272   66615 cri.go:89] found id: ""
	I0429 20:08:55.682304   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.682315   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:55.682323   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:55.682384   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:55.720157   66615 cri.go:89] found id: ""
	I0429 20:08:55.720189   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.720200   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:55.720207   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:55.720272   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:55.761748   66615 cri.go:89] found id: ""
	I0429 20:08:55.761773   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.761781   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:55.761786   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:55.761842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:55.802377   66615 cri.go:89] found id: ""
	I0429 20:08:55.802405   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.802416   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:55.802423   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:55.802494   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:55.838986   66615 cri.go:89] found id: ""
	I0429 20:08:55.839016   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.839024   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:55.839030   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:55.839077   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:55.874991   66615 cri.go:89] found id: ""
	I0429 20:08:55.875022   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.875032   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:55.875039   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:55.875106   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:55.913561   66615 cri.go:89] found id: ""
	I0429 20:08:55.913595   66615 logs.go:276] 0 containers: []
	W0429 20:08:55.913607   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:55.913618   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:55.913633   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:55.965355   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:55.965391   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:55.981222   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:55.981259   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:56.056656   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:56.056685   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:56.056701   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:56.135276   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:56.135309   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:58.682855   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:08:58.701679   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:08:58.701769   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:08:58.760807   66615 cri.go:89] found id: ""
	I0429 20:08:58.760828   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.760841   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:08:58.760858   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:08:58.760910   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:08:58.835167   66615 cri.go:89] found id: ""
	I0429 20:08:58.835204   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.835216   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:08:58.835223   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:08:58.835289   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:08:58.877367   66615 cri.go:89] found id: ""
	I0429 20:08:58.877398   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.877409   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:08:58.877417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:08:58.877483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:08:58.923726   66615 cri.go:89] found id: ""
	I0429 20:08:58.923751   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.923760   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:08:58.923766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:08:58.923817   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:08:58.967780   66615 cri.go:89] found id: ""
	I0429 20:08:58.967804   66615 logs.go:276] 0 containers: []
	W0429 20:08:58.967811   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:08:58.967816   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:08:58.967865   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:08:59.010646   66615 cri.go:89] found id: ""
	I0429 20:08:59.010682   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.010690   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:08:59.010697   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:08:59.010759   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:08:59.057380   66615 cri.go:89] found id: ""
	I0429 20:08:59.057408   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.057418   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:08:59.057426   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:08:59.057483   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:08:59.099669   66615 cri.go:89] found id: ""
	I0429 20:08:59.099698   66615 logs.go:276] 0 containers: []
	W0429 20:08:59.099706   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:08:59.099715   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:08:59.099731   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:08:59.146831   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:08:59.146861   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:08:59.204232   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:08:59.204274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:08:59.219799   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:08:59.219824   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:08:59.305438   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:08:59.305465   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:08:59.305481   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:08:56.555976   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:08:58.557892   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:00.049511   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:02.050709   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:00.256064   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:02.257053   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:01.885861   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:01.900746   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:01.900808   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:01.942174   66615 cri.go:89] found id: ""
	I0429 20:09:01.942210   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.942218   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:01.942224   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:01.942285   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:01.986463   66615 cri.go:89] found id: ""
	I0429 20:09:01.986491   66615 logs.go:276] 0 containers: []
	W0429 20:09:01.986502   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:01.986509   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:01.986570   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:02.026290   66615 cri.go:89] found id: ""
	I0429 20:09:02.026314   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.026321   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:02.026327   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:02.026375   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:02.064239   66615 cri.go:89] found id: ""
	I0429 20:09:02.064259   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.064266   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:02.064271   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:02.064321   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:02.105807   66615 cri.go:89] found id: ""
	I0429 20:09:02.105838   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.105857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:02.105866   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:02.105926   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:02.144939   66615 cri.go:89] found id: ""
	I0429 20:09:02.144962   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.144970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:02.144975   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:02.145037   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:02.192866   66615 cri.go:89] found id: ""
	I0429 20:09:02.192891   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.192899   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:02.192905   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:02.192955   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:02.232485   66615 cri.go:89] found id: ""
	I0429 20:09:02.232515   66615 logs.go:276] 0 containers: []
	W0429 20:09:02.232524   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:02.232533   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:02.232550   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:02.287374   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:02.287402   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:02.302979   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:02.303009   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:02.380693   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:02.380713   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:02.380725   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:02.467048   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:02.467084   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:01.055311   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:03.055538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:05.056325   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:04.051014   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:06.556497   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:04.758329   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:07.256328   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:05.018176   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:05.033178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:05.033238   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:05.079008   66615 cri.go:89] found id: ""
	I0429 20:09:05.079034   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.079043   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:05.079050   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:05.079113   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:05.118620   66615 cri.go:89] found id: ""
	I0429 20:09:05.118642   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.118650   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:05.118655   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:05.118714   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:05.159603   66615 cri.go:89] found id: ""
	I0429 20:09:05.159646   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.159660   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:05.159666   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:05.159733   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:05.200224   66615 cri.go:89] found id: ""
	I0429 20:09:05.200252   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.200262   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:05.200270   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:05.200344   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:05.246341   66615 cri.go:89] found id: ""
	I0429 20:09:05.246384   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.246396   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:05.246403   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:05.246471   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:05.286126   66615 cri.go:89] found id: ""
	I0429 20:09:05.286153   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.286163   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:05.286171   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:05.286235   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:05.326911   66615 cri.go:89] found id: ""
	I0429 20:09:05.326941   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.326952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:05.326958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:05.327019   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:05.365564   66615 cri.go:89] found id: ""
	I0429 20:09:05.365592   66615 logs.go:276] 0 containers: []
	W0429 20:09:05.365602   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:05.365621   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:05.365637   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:05.445857   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:05.445877   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:05.445889   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:05.530129   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:05.530164   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:05.573936   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:05.573971   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:05.631263   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:05.631299   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:08.147288   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:08.162949   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:08.163021   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:08.203009   66615 cri.go:89] found id: ""
	I0429 20:09:08.203033   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.203041   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:08.203047   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:08.203112   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:08.241708   66615 cri.go:89] found id: ""
	I0429 20:09:08.241735   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.241744   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:08.241750   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:08.241801   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:08.283976   66615 cri.go:89] found id: ""
	I0429 20:09:08.284005   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.284017   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:08.284023   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:08.284091   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:08.323909   66615 cri.go:89] found id: ""
	I0429 20:09:08.323939   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.323951   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:08.323962   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:08.324031   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:08.363236   66615 cri.go:89] found id: ""
	I0429 20:09:08.363263   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.363271   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:08.363276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:08.363328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:08.401767   66615 cri.go:89] found id: ""
	I0429 20:09:08.401790   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.401798   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:08.401803   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:08.401851   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:08.443678   66615 cri.go:89] found id: ""
	I0429 20:09:08.443709   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.443726   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:08.443731   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:08.443791   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:08.489025   66615 cri.go:89] found id: ""
	I0429 20:09:08.489069   66615 logs.go:276] 0 containers: []
	W0429 20:09:08.489103   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:08.489129   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:08.489163   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:08.543421   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:08.543462   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:08.560425   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:08.560459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:08.642819   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:08.642840   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:08.642855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:08.726644   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:08.726682   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:07.555523   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.556138   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.049664   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.050246   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:09.256452   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.257458   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:11.277817   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:11.292340   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:11.292420   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:11.330721   66615 cri.go:89] found id: ""
	I0429 20:09:11.330756   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.330768   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:11.330776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:11.330850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:11.372057   66615 cri.go:89] found id: ""
	I0429 20:09:11.372089   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.372098   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:11.372103   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:11.372155   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:11.414786   66615 cri.go:89] found id: ""
	I0429 20:09:11.414814   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.414825   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:11.414832   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:11.414898   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:11.454934   66615 cri.go:89] found id: ""
	I0429 20:09:11.454961   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.454969   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:11.454974   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:11.455039   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:11.494169   66615 cri.go:89] found id: ""
	I0429 20:09:11.494200   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.494211   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:11.494217   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:11.494277   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:11.541646   66615 cri.go:89] found id: ""
	I0429 20:09:11.541684   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.541694   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:11.541701   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:11.541766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:11.584025   66615 cri.go:89] found id: ""
	I0429 20:09:11.584055   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.584067   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:11.584075   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:11.584138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:11.622425   66615 cri.go:89] found id: ""
	I0429 20:09:11.622459   66615 logs.go:276] 0 containers: []
	W0429 20:09:11.622471   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:11.622481   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:11.622493   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:11.676416   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:11.676450   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:11.693793   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:11.693822   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:11.771410   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:11.771437   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:11.771454   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:11.854969   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:11.855047   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:14.398871   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:14.415894   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:14.415983   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:14.454718   66615 cri.go:89] found id: ""
	I0429 20:09:14.454752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.454763   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:14.454773   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:14.454836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:14.498562   66615 cri.go:89] found id: ""
	I0429 20:09:14.498591   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.498602   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:14.498609   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:14.498669   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:14.536357   66615 cri.go:89] found id: ""
	I0429 20:09:14.536384   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.536395   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:14.536402   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:14.536460   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:14.577240   66615 cri.go:89] found id: ""
	I0429 20:09:14.577274   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.577284   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:14.577291   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:14.577372   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:14.617231   66615 cri.go:89] found id: ""
	I0429 20:09:14.617266   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.617279   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:14.617287   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:14.617355   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:14.659053   66615 cri.go:89] found id: ""
	I0429 20:09:14.659081   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.659090   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:14.659096   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:14.659145   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:14.708723   66615 cri.go:89] found id: ""
	I0429 20:09:14.708752   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.708760   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:14.708766   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:14.708814   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:14.753732   66615 cri.go:89] found id: ""
	I0429 20:09:14.753762   66615 logs.go:276] 0 containers: []
	W0429 20:09:14.753773   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:14.753783   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:14.753798   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:14.771952   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:14.771985   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:14.842649   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:14.842680   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:14.842696   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:14.925565   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:14.925603   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:11.556903   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:14.057196   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:13.550999   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:16.054439   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:13.257735   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:15.756651   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:17.756760   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:14.975731   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:14.975765   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.528872   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:17.544373   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:17.544455   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:17.582977   66615 cri.go:89] found id: ""
	I0429 20:09:17.583001   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.583009   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:17.583014   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:17.583079   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:17.620322   66615 cri.go:89] found id: ""
	I0429 20:09:17.620352   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.620368   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:17.620373   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:17.620421   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:17.664339   66615 cri.go:89] found id: ""
	I0429 20:09:17.664367   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.664375   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:17.664381   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:17.664433   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:17.705150   66615 cri.go:89] found id: ""
	I0429 20:09:17.705175   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.705184   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:17.705189   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:17.705239   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:17.749713   66615 cri.go:89] found id: ""
	I0429 20:09:17.749738   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.749747   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:17.749752   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:17.749850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:17.791528   66615 cri.go:89] found id: ""
	I0429 20:09:17.791552   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.791560   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:17.791566   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:17.791615   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:17.834994   66615 cri.go:89] found id: ""
	I0429 20:09:17.835024   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.835035   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:17.835050   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:17.835107   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:17.872194   66615 cri.go:89] found id: ""
	I0429 20:09:17.872226   66615 logs.go:276] 0 containers: []
	W0429 20:09:17.872236   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:17.872248   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:17.872263   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:17.926899   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:17.926936   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:17.944184   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:17.944218   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:18.029224   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:18.029246   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:18.029258   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:18.111112   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:18.111147   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:16.557282   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:19.056682   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:18.549106   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:20.550026   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:19.758897   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:22.257104   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:20.655965   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:20.671420   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:20.671487   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:20.710100   66615 cri.go:89] found id: ""
	I0429 20:09:20.710132   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.710144   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:20.710151   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:20.710221   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:20.748849   66615 cri.go:89] found id: ""
	I0429 20:09:20.748877   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.748888   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:20.748894   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:20.748956   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:20.788113   66615 cri.go:89] found id: ""
	I0429 20:09:20.788140   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.788151   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:20.788157   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:20.788217   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:20.831432   66615 cri.go:89] found id: ""
	I0429 20:09:20.831455   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.831462   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:20.831470   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:20.831518   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:20.878156   66615 cri.go:89] found id: ""
	I0429 20:09:20.878183   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.878191   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:20.878197   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:20.878262   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:20.920691   66615 cri.go:89] found id: ""
	I0429 20:09:20.920718   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.920729   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:20.920735   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:20.920795   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:20.960674   66615 cri.go:89] found id: ""
	I0429 20:09:20.960709   66615 logs.go:276] 0 containers: []
	W0429 20:09:20.960719   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:20.960726   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:20.960786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:21.006462   66615 cri.go:89] found id: ""
	I0429 20:09:21.006486   66615 logs.go:276] 0 containers: []
	W0429 20:09:21.006495   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:21.006503   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:21.006518   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:21.060040   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:21.060076   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:21.077141   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:21.077171   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:21.157058   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:21.157083   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:21.157096   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:21.265626   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:21.265662   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:23.813718   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:23.828338   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:23.828400   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:23.868730   66615 cri.go:89] found id: ""
	I0429 20:09:23.868760   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.868771   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:23.868776   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:23.868842   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:23.907919   66615 cri.go:89] found id: ""
	I0429 20:09:23.907941   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.907949   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:23.907956   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:23.908011   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:23.956769   66615 cri.go:89] found id: ""
	I0429 20:09:23.956794   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.956805   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:23.956811   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:23.956875   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:23.998578   66615 cri.go:89] found id: ""
	I0429 20:09:23.998612   66615 logs.go:276] 0 containers: []
	W0429 20:09:23.998621   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:23.998628   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:23.998681   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:24.037458   66615 cri.go:89] found id: ""
	I0429 20:09:24.037485   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.037492   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:24.037499   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:24.037562   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:24.078305   66615 cri.go:89] found id: ""
	I0429 20:09:24.078336   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.078351   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:24.078358   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:24.078418   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:24.120100   66615 cri.go:89] found id: ""
	I0429 20:09:24.120129   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.120139   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:24.120147   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:24.120211   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:24.160953   66615 cri.go:89] found id: ""
	I0429 20:09:24.160988   66615 logs.go:276] 0 containers: []
	W0429 20:09:24.161000   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:24.161012   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:24.161029   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:24.176654   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:24.176686   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:24.256631   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:24.256652   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:24.256668   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:24.335379   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:24.335424   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:24.379616   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:24.379649   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:21.556726   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:24.057483   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:23.050004   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:25.550882   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:27.551051   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:24.257726   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:26.757098   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:26.937283   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:26.956185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:26.956252   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:26.997000   66615 cri.go:89] found id: ""
	I0429 20:09:26.997034   66615 logs.go:276] 0 containers: []
	W0429 20:09:26.997046   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:26.997053   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:26.997115   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:27.042494   66615 cri.go:89] found id: ""
	I0429 20:09:27.042527   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.042538   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:27.042546   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:27.042608   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:27.086170   66615 cri.go:89] found id: ""
	I0429 20:09:27.086199   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.086211   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:27.086218   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:27.086282   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:27.126502   66615 cri.go:89] found id: ""
	I0429 20:09:27.126531   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.126542   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:27.126560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:27.126635   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:27.175102   66615 cri.go:89] found id: ""
	I0429 20:09:27.175134   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.175142   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:27.175148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:27.175216   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:27.215983   66615 cri.go:89] found id: ""
	I0429 20:09:27.216013   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.216025   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:27.216033   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:27.216097   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:27.256427   66615 cri.go:89] found id: ""
	I0429 20:09:27.256456   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.256467   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:27.256474   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:27.256540   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:27.298444   66615 cri.go:89] found id: ""
	I0429 20:09:27.298479   66615 logs.go:276] 0 containers: []
	W0429 20:09:27.298490   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:27.298501   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:27.298517   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:27.381579   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:27.381625   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:27.429304   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:27.429350   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:27.483044   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:27.483082   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:27.500304   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:27.500332   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:27.583909   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:26.555285   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:28.560544   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:30.049769   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:32.050537   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:29.256689   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:31.257554   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:30.084904   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:30.102417   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:30.102486   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:30.146726   66615 cri.go:89] found id: ""
	I0429 20:09:30.146748   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.146755   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:30.146761   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:30.146809   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:30.190739   66615 cri.go:89] found id: ""
	I0429 20:09:30.190768   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.190780   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:30.190788   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:30.190853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:30.228836   66615 cri.go:89] found id: ""
	I0429 20:09:30.228864   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.228879   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:30.228887   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:30.228951   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:30.270876   66615 cri.go:89] found id: ""
	I0429 20:09:30.270912   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.270920   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:30.270925   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:30.270995   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:30.310762   66615 cri.go:89] found id: ""
	I0429 20:09:30.310787   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.310795   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:30.310801   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:30.310850   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:30.356339   66615 cri.go:89] found id: ""
	I0429 20:09:30.356363   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.356371   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:30.356376   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:30.356430   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:30.395540   66615 cri.go:89] found id: ""
	I0429 20:09:30.395575   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.395589   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:30.395598   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:30.395671   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:30.446237   66615 cri.go:89] found id: ""
	I0429 20:09:30.446263   66615 logs.go:276] 0 containers: []
	W0429 20:09:30.446276   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:30.446286   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:30.446301   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:30.537309   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:30.537334   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:30.537349   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:30.629116   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:30.629151   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:30.683308   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:30.683337   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:30.735879   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:30.735910   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.252322   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:33.268276   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:33.268351   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:33.309531   66615 cri.go:89] found id: ""
	I0429 20:09:33.309622   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.309641   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:33.309650   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:33.309719   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:33.367480   66615 cri.go:89] found id: ""
	I0429 20:09:33.367515   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.367527   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:33.367535   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:33.367595   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:33.433717   66615 cri.go:89] found id: ""
	I0429 20:09:33.433742   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.433751   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:33.433756   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:33.433820   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:33.484053   66615 cri.go:89] found id: ""
	I0429 20:09:33.484081   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.484093   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:33.484100   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:33.484165   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:33.524103   66615 cri.go:89] found id: ""
	I0429 20:09:33.524126   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.524136   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:33.524143   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:33.524204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:33.565692   66615 cri.go:89] found id: ""
	I0429 20:09:33.565711   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.565719   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:33.565724   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:33.565784   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:33.607119   66615 cri.go:89] found id: ""
	I0429 20:09:33.607143   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.607153   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:33.607160   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:33.607225   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:33.648407   66615 cri.go:89] found id: ""
	I0429 20:09:33.648432   66615 logs.go:276] 0 containers: []
	W0429 20:09:33.648440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:33.648449   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:33.648463   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:33.730744   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:33.730781   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:33.774295   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:33.774328   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:33.829609   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:33.829653   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:33.846048   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:33.846092   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:33.924413   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:31.056307   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:33.056538   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:34.548872   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.550765   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:33.758571   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.257361   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:36.425072   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:36.440185   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:36.440268   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:36.484364   66615 cri.go:89] found id: ""
	I0429 20:09:36.484386   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.484394   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:36.484400   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:36.484450   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:36.520436   66615 cri.go:89] found id: ""
	I0429 20:09:36.520466   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.520478   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:36.520487   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:36.520549   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:36.563597   66615 cri.go:89] found id: ""
	I0429 20:09:36.563622   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.563630   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:36.563635   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:36.563704   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:36.613106   66615 cri.go:89] found id: ""
	I0429 20:09:36.613134   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.613143   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:36.613148   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:36.613204   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:36.658127   66615 cri.go:89] found id: ""
	I0429 20:09:36.658151   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.658159   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:36.658166   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:36.658229   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:36.707388   66615 cri.go:89] found id: ""
	I0429 20:09:36.707415   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.707423   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:36.707430   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:36.707479   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:36.753363   66615 cri.go:89] found id: ""
	I0429 20:09:36.753394   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.753405   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:36.753413   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:36.753475   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:36.801492   66615 cri.go:89] found id: ""
	I0429 20:09:36.801513   66615 logs.go:276] 0 containers: []
	W0429 20:09:36.801521   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:36.801530   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:36.801542   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:36.857055   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:36.857108   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:36.874567   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:36.874595   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:36.956176   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:36.956202   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:36.956217   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:37.039958   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:37.039997   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:39.591442   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:39.607842   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:39.607927   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:39.651917   66615 cri.go:89] found id: ""
	I0429 20:09:39.651941   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.651948   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:39.651955   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:39.652020   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:39.690032   66615 cri.go:89] found id: ""
	I0429 20:09:39.690059   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.690078   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:39.690086   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:39.690152   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:39.733176   66615 cri.go:89] found id: ""
	I0429 20:09:39.733200   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.733209   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:39.733215   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:39.733261   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:39.779528   66615 cri.go:89] found id: ""
	I0429 20:09:39.779560   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.779572   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:39.779581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:39.779650   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:39.822408   66615 cri.go:89] found id: ""
	I0429 20:09:39.822436   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.822445   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:39.822452   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:39.822522   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:39.864895   66615 cri.go:89] found id: ""
	I0429 20:09:39.864922   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.864930   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:39.864938   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:39.865008   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:39.907498   66615 cri.go:89] found id: ""
	I0429 20:09:39.907523   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.907533   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:39.907539   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:39.907606   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:39.948400   66615 cri.go:89] found id: ""
	I0429 20:09:39.948430   66615 logs.go:276] 0 containers: []
	W0429 20:09:39.948440   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:39.948449   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:39.948465   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:35.557262   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:38.056877   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:40.058568   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:39.049938   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:41.050139   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:38.756883   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:41.256775   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:39.964733   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:39.964763   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:40.043568   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:40.043593   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:40.043609   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:40.130776   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:40.130815   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:40.182011   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:40.182042   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:42.739068   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:42.756144   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:42.756286   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:42.798776   66615 cri.go:89] found id: ""
	I0429 20:09:42.798801   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.798810   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:42.798815   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:42.798861   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:42.837122   66615 cri.go:89] found id: ""
	I0429 20:09:42.837146   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.837154   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:42.837159   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:42.837205   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:42.875435   66615 cri.go:89] found id: ""
	I0429 20:09:42.875461   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.875471   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:42.875479   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:42.875536   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:42.920044   66615 cri.go:89] found id: ""
	I0429 20:09:42.920076   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.920087   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:42.920094   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:42.920175   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:42.960122   66615 cri.go:89] found id: ""
	I0429 20:09:42.960152   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.960163   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:42.960169   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:42.960215   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:42.999784   66615 cri.go:89] found id: ""
	I0429 20:09:42.999811   66615 logs.go:276] 0 containers: []
	W0429 20:09:42.999829   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:42.999837   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:42.999917   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:43.040882   66615 cri.go:89] found id: ""
	I0429 20:09:43.040930   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.040952   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:43.040959   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:43.041044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:43.082596   66615 cri.go:89] found id: ""
	I0429 20:09:43.082627   66615 logs.go:276] 0 containers: []
	W0429 20:09:43.082639   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:43.082650   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:43.082672   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:43.140302   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:43.140343   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:43.157508   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:43.157547   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:43.241025   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:43.241047   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:43.241061   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:43.325820   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:43.325855   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
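	(Editor's sketch, for readers unfamiliar with the probe loop repeated in the log above: minikube checks each control-plane component by running `sudo crictl ps -a --quiet --name=<component>` and treats empty output as "no container found". The Go program below is a minimal illustration of that check, not minikube's actual implementation; it assumes crictl and sudo are available on the node.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same component names the log cycles through.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// --quiet prints only container IDs, one per line; empty output means no match.
			out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}

	(In this run every probe returns zero containers, which is why the subsequent "describe nodes" step fails with a refused connection to localhost:8443.)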
	I0429 20:09:42.058727   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:44.556415   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:43.051020   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.550017   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:43.258400   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.756441   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:47.757029   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:45.871561   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:45.887323   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:45.887398   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:45.930021   66615 cri.go:89] found id: ""
	I0429 20:09:45.930050   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.930062   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:45.930088   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:45.930148   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:45.971404   66615 cri.go:89] found id: ""
	I0429 20:09:45.971434   66615 logs.go:276] 0 containers: []
	W0429 20:09:45.971445   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:45.971452   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:45.971513   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:46.018801   66615 cri.go:89] found id: ""
	I0429 20:09:46.018825   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.018833   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:46.018838   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:46.018886   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:46.065118   66615 cri.go:89] found id: ""
	I0429 20:09:46.065140   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.065148   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:46.065153   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:46.065201   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:46.105244   66615 cri.go:89] found id: ""
	I0429 20:09:46.105271   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.105294   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:46.105309   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:46.105373   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:46.153736   66615 cri.go:89] found id: ""
	I0429 20:09:46.153759   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.153768   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:46.153773   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:46.153836   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:46.198940   66615 cri.go:89] found id: ""
	I0429 20:09:46.198965   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.198973   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:46.198979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:46.199064   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:46.238001   66615 cri.go:89] found id: ""
	I0429 20:09:46.238031   66615 logs.go:276] 0 containers: []
	W0429 20:09:46.238044   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:46.238056   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:46.238087   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:46.292309   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:46.292357   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:46.307243   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:46.307274   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:46.386832   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:46.386852   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:46.386869   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:46.468856   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:46.468891   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.017354   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:49.032753   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:49.032832   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:49.075345   66615 cri.go:89] found id: ""
	I0429 20:09:49.075375   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.075388   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:49.075394   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:49.075447   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:49.115294   66615 cri.go:89] found id: ""
	I0429 20:09:49.115328   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.115339   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:49.115347   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:49.115412   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:49.164115   66615 cri.go:89] found id: ""
	I0429 20:09:49.164140   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.164148   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:49.164154   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:49.164210   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:49.207643   66615 cri.go:89] found id: ""
	I0429 20:09:49.207668   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.207679   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:49.207698   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:49.207762   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:49.247121   66615 cri.go:89] found id: ""
	I0429 20:09:49.247147   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.247156   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:49.247162   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:49.247220   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:49.288594   66615 cri.go:89] found id: ""
	I0429 20:09:49.288626   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.288636   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:49.288643   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:49.288711   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:49.330243   66615 cri.go:89] found id: ""
	I0429 20:09:49.330273   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.330290   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:49.330300   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:49.330365   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:49.371304   66615 cri.go:89] found id: ""
	I0429 20:09:49.371348   66615 logs.go:276] 0 containers: []
	W0429 20:09:49.371360   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:49.371372   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:49.371392   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:49.450910   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:49.450949   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:49.494940   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:49.494970   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:49.553320   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:49.553364   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:49.568850   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:49.568878   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:49.644932   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:46.559246   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:49.056790   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:48.050285   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:50.050579   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.549882   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:49.757113   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.258680   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:52.145702   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:52.162681   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:52.162756   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:52.204816   66615 cri.go:89] found id: ""
	I0429 20:09:52.204858   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.204870   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:52.204888   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:52.204963   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:52.248481   66615 cri.go:89] found id: ""
	I0429 20:09:52.248510   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.248519   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:52.248525   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:52.248596   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:52.289158   66615 cri.go:89] found id: ""
	I0429 20:09:52.289186   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.289194   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:52.289200   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:52.289260   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:52.329905   66615 cri.go:89] found id: ""
	I0429 20:09:52.329931   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.329942   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:52.329950   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:52.330025   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:52.372523   66615 cri.go:89] found id: ""
	I0429 20:09:52.372546   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.372554   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:52.372560   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:52.372623   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:52.414936   66615 cri.go:89] found id: ""
	I0429 20:09:52.414970   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.414982   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:52.414989   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:52.415056   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:52.454139   66615 cri.go:89] found id: ""
	I0429 20:09:52.454164   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.454172   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:52.454178   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:52.454236   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:52.494093   66615 cri.go:89] found id: ""
	I0429 20:09:52.494129   66615 logs.go:276] 0 containers: []
	W0429 20:09:52.494142   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:52.494155   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:52.494195   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:52.552104   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:52.552142   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:52.568430   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:52.568459   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:52.649708   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:52.649736   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:52.649752   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:52.746231   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:52.746272   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:51.057536   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:53.556862   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:55.049835   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:57.050606   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:54.759308   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:57.256396   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:55.296228   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:55.311257   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:55.311328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:55.352071   66615 cri.go:89] found id: ""
	I0429 20:09:55.352098   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.352109   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:55.352116   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:55.352177   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:55.399806   66615 cri.go:89] found id: ""
	I0429 20:09:55.399837   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.399847   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:55.399860   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:55.399947   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:55.444372   66615 cri.go:89] found id: ""
	I0429 20:09:55.444398   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.444406   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:55.444411   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:55.444468   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:55.485542   66615 cri.go:89] found id: ""
	I0429 20:09:55.485568   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.485579   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:55.485586   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:55.485670   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:55.535452   66615 cri.go:89] found id: ""
	I0429 20:09:55.535483   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.535494   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:55.535502   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:55.535566   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:55.578009   66615 cri.go:89] found id: ""
	I0429 20:09:55.578036   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.578048   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:55.578056   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:55.578138   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:55.618302   66615 cri.go:89] found id: ""
	I0429 20:09:55.618336   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.618347   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:55.618355   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:55.618419   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:55.660489   66615 cri.go:89] found id: ""
	I0429 20:09:55.660518   66615 logs.go:276] 0 containers: []
	W0429 20:09:55.660526   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:55.660535   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:55.660548   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:55.713953   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:55.713993   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:55.729624   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:55.729656   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:55.813718   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:55.813746   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:55.813762   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:55.898805   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:55.898849   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:58.467014   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:09:58.482852   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:09:58.482925   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:09:58.522862   66615 cri.go:89] found id: ""
	I0429 20:09:58.522896   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.522908   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:09:58.522916   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:09:58.523000   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:09:58.568234   66615 cri.go:89] found id: ""
	I0429 20:09:58.568259   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.568266   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:09:58.568272   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:09:58.568327   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:09:58.609147   66615 cri.go:89] found id: ""
	I0429 20:09:58.609175   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.609185   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:09:58.609192   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:09:58.609265   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:09:58.657074   66615 cri.go:89] found id: ""
	I0429 20:09:58.657104   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.657115   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:09:58.657122   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:09:58.657186   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:09:58.706819   66615 cri.go:89] found id: ""
	I0429 20:09:58.706846   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.706857   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:09:58.706865   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:09:58.706929   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:09:58.754967   66615 cri.go:89] found id: ""
	I0429 20:09:58.754998   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.755007   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:09:58.755018   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:09:58.755078   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:09:58.793657   66615 cri.go:89] found id: ""
	I0429 20:09:58.793694   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.793704   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:09:58.793709   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:09:58.793766   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:09:58.832023   66615 cri.go:89] found id: ""
	I0429 20:09:58.832055   66615 logs.go:276] 0 containers: []
	W0429 20:09:58.832066   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:09:58.832078   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:09:58.832094   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:09:58.886568   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:09:58.886605   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:09:58.902126   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:09:58.902154   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:09:58.986786   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:09:58.986814   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:09:58.986831   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:09:59.072258   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:09:59.072296   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:09:55.557245   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:58.056570   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:59.549825   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:02.050651   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:09:59.756493   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:01.756935   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:01.620172   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:01.636958   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:01.637055   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:01.703865   66615 cri.go:89] found id: ""
	I0429 20:10:01.703890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.703899   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:01.703905   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:01.703950   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:01.742655   66615 cri.go:89] found id: ""
	I0429 20:10:01.742684   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.742692   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:01.742707   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:01.742778   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:01.782866   66615 cri.go:89] found id: ""
	I0429 20:10:01.782890   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.782901   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:01.782908   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:01.782964   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:01.822958   66615 cri.go:89] found id: ""
	I0429 20:10:01.822984   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.822992   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:01.822997   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:01.823044   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:01.868581   66615 cri.go:89] found id: ""
	I0429 20:10:01.868604   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.868612   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:01.868622   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:01.868675   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:01.908216   66615 cri.go:89] found id: ""
	I0429 20:10:01.908241   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.908249   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:01.908255   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:01.908328   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:01.953100   66615 cri.go:89] found id: ""
	I0429 20:10:01.953131   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.953142   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:01.953150   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:01.953213   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:01.999940   66615 cri.go:89] found id: ""
	I0429 20:10:01.999974   66615 logs.go:276] 0 containers: []
	W0429 20:10:01.999988   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:01.999999   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:02.000012   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:02.061669   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:02.061704   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:02.077609   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:02.077640   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:02.169643   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:02.169666   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:02.169679   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:02.250615   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:02.250657   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:04.803629   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:04.819286   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:04.819364   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:04.860501   66615 cri.go:89] found id: ""
	I0429 20:10:04.860530   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.860541   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:04.860548   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:04.860672   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:04.898444   66615 cri.go:89] found id: ""
	I0429 20:10:04.898472   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.898480   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:04.898486   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:04.898546   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:04.936569   66615 cri.go:89] found id: ""
	I0429 20:10:04.936599   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.936609   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:04.936617   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:04.936695   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:00.556325   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:02.557754   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:05.058245   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:04.551711   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:07.050327   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:03.757096   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:06.257529   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:04.979667   66615 cri.go:89] found id: ""
	I0429 20:10:04.979696   66615 logs.go:276] 0 containers: []
	W0429 20:10:04.979708   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:04.979715   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:04.979768   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:05.019608   66615 cri.go:89] found id: ""
	I0429 20:10:05.019638   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.019650   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:05.019658   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:05.019724   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:05.063723   66615 cri.go:89] found id: ""
	I0429 20:10:05.063749   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.063758   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:05.063765   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:05.063821   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:05.106676   66615 cri.go:89] found id: ""
	I0429 20:10:05.106704   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.106714   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:05.106721   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:05.106783   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:05.147652   66615 cri.go:89] found id: ""
	I0429 20:10:05.147683   66615 logs.go:276] 0 containers: []
	W0429 20:10:05.147693   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:05.147704   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:05.147721   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:05.189048   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:05.189085   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:05.248635   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:05.248669   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:05.265791   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:05.265826   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:05.343190   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:05.343217   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:05.343234   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:07.926868   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:07.942581   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:07.942656   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:07.981316   66615 cri.go:89] found id: ""
	I0429 20:10:07.981349   66615 logs.go:276] 0 containers: []
	W0429 20:10:07.981361   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:10:07.981368   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:07.981429   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:08.024017   66615 cri.go:89] found id: ""
	I0429 20:10:08.024045   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.024056   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:10:08.024062   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:08.024146   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:08.075761   66615 cri.go:89] found id: ""
	I0429 20:10:08.075786   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.075798   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:10:08.075805   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:08.075864   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:08.146501   66615 cri.go:89] found id: ""
	I0429 20:10:08.146528   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.146536   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:10:08.146541   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:08.146624   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:08.204987   66615 cri.go:89] found id: ""
	I0429 20:10:08.205013   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.205021   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:10:08.205027   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:08.205083   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:08.244930   66615 cri.go:89] found id: ""
	I0429 20:10:08.244959   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.244970   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:10:08.244979   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:08.245040   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:08.284204   66615 cri.go:89] found id: ""
	I0429 20:10:08.284232   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.284243   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:08.284250   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:10:08.284305   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:10:08.324077   66615 cri.go:89] found id: ""
	I0429 20:10:08.324102   66615 logs.go:276] 0 containers: []
	W0429 20:10:08.324113   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:10:08.324123   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:08.324139   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:08.341584   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:08.341614   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:10:08.429808   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:10:08.429827   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:08.429840   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:08.509906   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:10:08.509942   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:08.562662   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:08.562697   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:07.557462   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:10.055718   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:09.553108   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:12.050533   66218 pod_ready.go:102] pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:12.543954   66218 pod_ready.go:81] duration metric: took 4m0.001047967s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" ...
	E0429 20:10:12.543994   66218 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-6mpnm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 20:10:12.544032   66218 pod_ready.go:38] duration metric: took 4m6.615064199s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:10:12.544058   66218 kubeadm.go:591] duration metric: took 4m18.60301174s to restartPrimaryControlPlane
	W0429 20:10:12.544116   66218 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:10:12.544146   66218 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:10:08.757127   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:10.760764   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:11.121673   66615 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:11.137328   66615 kubeadm.go:591] duration metric: took 4m4.72832668s to restartPrimaryControlPlane
	W0429 20:10:11.137411   66615 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:10:11.137446   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:10:13.254357   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.116867978s)
	I0429 20:10:13.254436   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:13.275293   66615 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:10:13.287073   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:10:13.298046   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:10:13.298080   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:10:13.298132   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:10:13.311790   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:10:13.311861   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:10:13.323201   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:10:13.334284   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:10:13.334357   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:10:13.348597   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.361993   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:10:13.362055   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:10:13.376185   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:10:13.389715   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:10:13.389778   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:10:13.403955   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:10:13.675887   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:10:12.056403   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:14.059895   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:13.257345   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:15.257388   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:17.259138   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:16.557200   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:18.559617   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:19.756708   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:21.757655   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:21.056581   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:23.057477   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:24.256386   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:26.757303   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:25.556902   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:28.055172   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:30.056549   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:29.256790   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:31.757538   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:32.560174   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:35.056286   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:33.758717   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:36.257274   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:37.056603   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:39.557292   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:38.757913   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:40.758857   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:42.056927   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:44.557003   66875 pod_ready.go:102] pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:44.557038   66875 pod_ready.go:81] duration metric: took 4m0.008018273s for pod "metrics-server-569cc877fc-g6gw2" in "kube-system" namespace to be "Ready" ...
	E0429 20:10:44.557050   66875 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0429 20:10:44.557062   66875 pod_ready.go:38] duration metric: took 4m2.911025288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:10:44.557085   66875 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:10:44.557123   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:44.557191   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:44.620871   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:44.620900   66875 cri.go:89] found id: ""
	I0429 20:10:44.620910   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:44.620970   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.626852   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:44.626919   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:44.673726   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:44.673753   66875 cri.go:89] found id: ""
	I0429 20:10:44.673762   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:44.673827   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.680083   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:44.680157   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:44.724866   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:44.724899   66875 cri.go:89] found id: ""
	I0429 20:10:44.724909   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:44.724976   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.730438   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:44.730492   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:44.785159   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:44.785178   66875 cri.go:89] found id: ""
	I0429 20:10:44.785185   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:44.785230   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.790370   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:44.790432   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:44.839200   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:44.839219   66875 cri.go:89] found id: ""
	I0429 20:10:44.839226   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:44.839277   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.845411   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:44.845490   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:44.907184   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:44.907210   66875 cri.go:89] found id: ""
	I0429 20:10:44.907224   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:44.907281   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:44.914531   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:44.914596   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:44.957389   66875 cri.go:89] found id: ""
	I0429 20:10:44.957422   66875 logs.go:276] 0 containers: []
	W0429 20:10:44.957430   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:44.957436   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:44.957493   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:45.001760   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:45.001783   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:45.001789   66875 cri.go:89] found id: ""
	I0429 20:10:45.001796   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:45.001845   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:45.007293   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:45.012864   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:45.012886   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:45.406875   66218 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.862702626s)
	I0429 20:10:45.406957   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:45.424927   66218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:10:45.436628   66218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:10:45.447896   66218 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:10:45.447921   66218 kubeadm.go:156] found existing configuration files:
	
	I0429 20:10:45.447970   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:10:45.458604   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:10:45.458662   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:10:45.469701   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:10:45.479738   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:10:45.479796   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:10:45.490097   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:10:45.500840   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:10:45.500903   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:10:45.512918   66218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:10:45.524679   66218 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:10:45.524756   66218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:10:45.536044   66218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:10:45.598481   66218 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:10:45.598556   66218 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:10:45.783162   66218 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:10:45.783321   66218 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:10:45.783481   66218 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:10:46.079842   66218 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:10:46.081981   66218 out.go:204]   - Generating certificates and keys ...
	I0429 20:10:46.082084   66218 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:10:46.082174   66218 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:10:46.082295   66218 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:10:46.082382   66218 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:10:46.082485   66218 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:10:46.082578   66218 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:10:46.082694   66218 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:10:46.082793   66218 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:10:46.082906   66218 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:10:46.082976   66218 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:10:46.083009   66218 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:10:46.083070   66218 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:10:46.242368   66218 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:10:46.667998   66218 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:10:46.832801   66218 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:10:47.033146   66218 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:10:47.265305   66218 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:10:47.266631   66218 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:10:47.271057   66218 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:10:47.273021   66218 out.go:204]   - Booting up control plane ...
	I0429 20:10:47.273128   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:10:47.273245   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:10:47.273333   66218 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:10:47.293530   66218 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:10:47.294487   66218 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:10:47.294564   66218 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:10:47.435669   66218 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:10:47.435802   66218 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:10:43.256983   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:45.257106   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:47.757018   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:45.564197   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:45.564231   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:45.635133   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:45.635168   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:45.779957   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:45.779992   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:45.827796   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:45.827828   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:45.870603   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:45.870636   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:45.935181   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:45.935220   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:46.007476   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:46.007518   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:46.071132   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:46.071169   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:46.130185   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:46.130218   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:46.148649   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:46.148684   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:46.196227   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:46.196266   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:46.245663   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:46.245707   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:48.789522   66875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:10:48.810752   66875 api_server.go:72] duration metric: took 4m14.399329979s to wait for apiserver process to appear ...
	I0429 20:10:48.810785   66875 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:10:48.810826   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:48.810921   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:48.868391   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:48.868415   66875 cri.go:89] found id: ""
	I0429 20:10:48.868424   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:48.868490   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.874253   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:48.874329   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:48.934057   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:48.934103   66875 cri.go:89] found id: ""
	I0429 20:10:48.934113   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:48.934173   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.940161   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:48.940244   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:48.992205   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:48.992227   66875 cri.go:89] found id: ""
	I0429 20:10:48.992234   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:48.992297   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:48.997496   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:48.997568   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:49.038579   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:49.038612   66875 cri.go:89] found id: ""
	I0429 20:10:49.038622   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:49.038683   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.045062   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:49.045129   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:49.084533   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:49.084561   66875 cri.go:89] found id: ""
	I0429 20:10:49.084570   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:49.084628   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.089601   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:49.089680   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:49.133281   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:49.133315   66875 cri.go:89] found id: ""
	I0429 20:10:49.133324   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:49.133387   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.140784   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:49.140889   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:49.201071   66875 cri.go:89] found id: ""
	I0429 20:10:49.201102   66875 logs.go:276] 0 containers: []
	W0429 20:10:49.201112   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:49.201117   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:49.201182   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:49.248708   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:49.248732   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:49.248738   66875 cri.go:89] found id: ""
	I0429 20:10:49.248747   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:49.248807   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.254131   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:49.259257   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:49.259287   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:49.325386   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:49.325417   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:49.371335   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:49.371365   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:49.414056   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:49.414112   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:49.469457   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:49.469493   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:49.523091   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:49.523123   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:49.581937   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:49.581977   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:49.599704   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:49.599738   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:49.738943   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:49.738984   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:49.814482   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:49.814521   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:50.306035   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:50.306084   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:50.371400   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:50.371485   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:50.426578   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:50.426613   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:48.438095   66218 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002489157s
	I0429 20:10:48.438230   66218 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:10:49.758262   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:52.256578   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:53.941848   66218 kubeadm.go:309] [api-check] The API server is healthy after 5.503491397s
	I0429 20:10:53.961404   66218 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:10:53.979792   66218 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:10:54.018524   66218 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:10:54.018776   66218 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-456788 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:10:54.037050   66218 kubeadm.go:309] [bootstrap-token] Using token: 793n05.pmfi0tdyn7q4x0lt
	I0429 20:10:54.038421   66218 out.go:204]   - Configuring RBAC rules ...
	I0429 20:10:54.038551   66218 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:10:54.045190   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:10:54.054625   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:10:54.060216   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:10:54.068878   66218 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:10:54.073537   66218 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:10:54.355285   66218 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:10:54.800956   66218 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:10:55.352995   66218 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:10:55.353026   66218 kubeadm.go:309] 
	I0429 20:10:55.353135   66218 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:10:55.353158   66218 kubeadm.go:309] 
	I0429 20:10:55.353245   66218 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:10:55.353254   66218 kubeadm.go:309] 
	I0429 20:10:55.353290   66218 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:10:55.353382   66218 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:10:55.353456   66218 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:10:55.353467   66218 kubeadm.go:309] 
	I0429 20:10:55.353564   66218 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:10:55.353578   66218 kubeadm.go:309] 
	I0429 20:10:55.353637   66218 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:10:55.353648   66218 kubeadm.go:309] 
	I0429 20:10:55.353735   66218 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:10:55.353937   66218 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:10:55.354052   66218 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:10:55.354095   66218 kubeadm.go:309] 
	I0429 20:10:55.354216   66218 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:10:55.354334   66218 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:10:55.354348   66218 kubeadm.go:309] 
	I0429 20:10:55.354464   66218 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 793n05.pmfi0tdyn7q4x0lt \
	I0429 20:10:55.354615   66218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 20:10:55.354643   66218 kubeadm.go:309] 	--control-plane 
	I0429 20:10:55.354667   66218 kubeadm.go:309] 
	I0429 20:10:55.354799   66218 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:10:55.354810   66218 kubeadm.go:309] 
	I0429 20:10:55.354943   66218 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 793n05.pmfi0tdyn7q4x0lt \
	I0429 20:10:55.355111   66218 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
	I0429 20:10:55.355493   66218 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:10:55.355513   66218 cni.go:84] Creating CNI manager for ""
	I0429 20:10:55.355520   66218 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:10:55.357341   66218 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:10:52.999575   66875 api_server.go:253] Checking apiserver healthz at https://192.168.61.106:8444/healthz ...
	I0429 20:10:53.005598   66875 api_server.go:279] https://192.168.61.106:8444/healthz returned 200:
	ok
	I0429 20:10:53.006923   66875 api_server.go:141] control plane version: v1.30.0
	I0429 20:10:53.006951   66875 api_server.go:131] duration metric: took 4.196158371s to wait for apiserver health ...
	I0429 20:10:53.006978   66875 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:10:53.007011   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:10:53.007073   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:10:53.064156   66875 cri.go:89] found id: "40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:53.064186   66875 cri.go:89] found id: ""
	I0429 20:10:53.064196   66875 logs.go:276] 1 containers: [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552]
	I0429 20:10:53.064256   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.069282   66875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:10:53.069361   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:10:53.128981   66875 cri.go:89] found id: "7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:53.129016   66875 cri.go:89] found id: ""
	I0429 20:10:53.129025   66875 logs.go:276] 1 containers: [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f]
	I0429 20:10:53.129086   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.134680   66875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:10:53.134779   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:10:53.188828   66875 cri.go:89] found id: "ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:53.188857   66875 cri.go:89] found id: ""
	I0429 20:10:53.188869   66875 logs.go:276] 1 containers: [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52]
	I0429 20:10:53.188922   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.195332   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:10:53.195401   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:10:53.245528   66875 cri.go:89] found id: "38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:53.245548   66875 cri.go:89] found id: ""
	I0429 20:10:53.245556   66875 logs.go:276] 1 containers: [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0]
	I0429 20:10:53.245617   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.251849   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:10:53.251925   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:10:53.302914   66875 cri.go:89] found id: "5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:53.302941   66875 cri.go:89] found id: ""
	I0429 20:10:53.302950   66875 logs.go:276] 1 containers: [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561]
	I0429 20:10:53.303004   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.308072   66875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:10:53.308138   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:10:53.358655   66875 cri.go:89] found id: "453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:53.358684   66875 cri.go:89] found id: ""
	I0429 20:10:53.358693   66875 logs.go:276] 1 containers: [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9]
	I0429 20:10:53.358753   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.363796   66875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:10:53.363875   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:10:53.413543   66875 cri.go:89] found id: ""
	I0429 20:10:53.413573   66875 logs.go:276] 0 containers: []
	W0429 20:10:53.413586   66875 logs.go:278] No container was found matching "kindnet"
	I0429 20:10:53.413593   66875 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0429 20:10:53.413651   66875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0429 20:10:53.457365   66875 cri.go:89] found id: "55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:53.457393   66875 cri.go:89] found id: "d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:53.457399   66875 cri.go:89] found id: ""
	I0429 20:10:53.457409   66875 logs.go:276] 2 containers: [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9]
	I0429 20:10:53.457473   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.464321   66875 ssh_runner.go:195] Run: which crictl
	I0429 20:10:53.469358   66875 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:10:53.469377   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0429 20:10:53.605546   66875 logs.go:123] Gathering logs for kube-controller-manager [453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9] ...
	I0429 20:10:53.605594   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 453c723fef9adaf1366c6dcbbf0824aa66761b6ab1c458dedf70910ff38a27a9"
	I0429 20:10:53.682788   66875 logs.go:123] Gathering logs for storage-provisioner [55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412] ...
	I0429 20:10:53.682837   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55a4d86ba249fd4118d8096c962d43ea842641cc11e5b518256d409825207412"
	I0429 20:10:53.725985   66875 logs.go:123] Gathering logs for storage-provisioner [d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9] ...
	I0429 20:10:53.726017   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d235258efef8ba725ca62190a715c6c9849ef7b6428b8b76481de6a56f153ba9"
	I0429 20:10:53.775864   66875 logs.go:123] Gathering logs for kubelet ...
	I0429 20:10:53.775890   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:10:53.834762   66875 logs.go:123] Gathering logs for dmesg ...
	I0429 20:10:53.834801   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:10:53.853796   66875 logs.go:123] Gathering logs for kube-apiserver [40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552] ...
	I0429 20:10:53.853830   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40e61b985a70c30681b1a1021ba5d064ce3e551092e6c0ee8d8037c51b498552"
	I0429 20:10:53.915651   66875 logs.go:123] Gathering logs for etcd [7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f] ...
	I0429 20:10:53.915680   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7813548bb1ebb37251c181bac33b85fafc7a3637530ab57960585294e2506f8f"
	I0429 20:10:53.968857   66875 logs.go:123] Gathering logs for coredns [ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52] ...
	I0429 20:10:53.968885   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff819232db9ec756ffa29421c3f2fc541a2f570e446053990a70918746bb5c52"
	I0429 20:10:54.024061   66875 logs.go:123] Gathering logs for kube-scheduler [38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0] ...
	I0429 20:10:54.024090   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c3d9d672593ffacce501e2a106b1042c11cad115936473a38226af55d9b0e0"
	I0429 20:10:54.079637   66875 logs.go:123] Gathering logs for kube-proxy [5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561] ...
	I0429 20:10:54.079674   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5291e43ebc5a398517cfd2682128ff792101ac58ab8ee9cd1c98272eff98a561"
	I0429 20:10:54.129296   66875 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:10:54.129325   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0429 20:10:54.499803   66875 logs.go:123] Gathering logs for container status ...
	I0429 20:10:54.499861   66875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:10:57.070245   66875 system_pods.go:59] 8 kube-system pods found
	I0429 20:10:57.070288   66875 system_pods.go:61] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running
	I0429 20:10:57.070296   66875 system_pods.go:61] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running
	I0429 20:10:57.070302   66875 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running
	I0429 20:10:57.070308   66875 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running
	I0429 20:10:57.070313   66875 system_pods.go:61] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running
	I0429 20:10:57.070318   66875 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running
	I0429 20:10:57.070329   66875 system_pods.go:61] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:10:57.070339   66875 system_pods.go:61] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running
	I0429 20:10:57.070353   66875 system_pods.go:74] duration metric: took 4.063366088s to wait for pod list to return data ...
	I0429 20:10:57.070366   66875 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:10:57.077008   66875 default_sa.go:45] found service account: "default"
	I0429 20:10:57.077031   66875 default_sa.go:55] duration metric: took 6.655489ms for default service account to be created ...
	I0429 20:10:57.077040   66875 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:10:57.087665   66875 system_pods.go:86] 8 kube-system pods found
	I0429 20:10:57.087695   66875 system_pods.go:89] "coredns-7db6d8ff4d-7m65s" [72397559-b0da-492a-be1c-297027021f50] Running
	I0429 20:10:57.087701   66875 system_pods.go:89] "etcd-default-k8s-diff-port-866143" [a2f00c6c-e22e-4f0e-b91e-f039f40b2e2e] Running
	I0429 20:10:57.087707   66875 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866143" [ce3cd4e5-c057-4eed-bfb1-6602f86cb357] Running
	I0429 20:10:57.087711   66875 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866143" [c9a320b7-4ce8-4662-ae2a-fdf3e26312d5] Running
	I0429 20:10:57.087715   66875 system_pods.go:89] "kube-proxy-zddtx" [3d47956c-26c1-48e2-8f42-a2a81d201503] Running
	I0429 20:10:57.087719   66875 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866143" [3aa5108c-167e-4efe-b612-6df834802755] Running
	I0429 20:10:57.087726   66875 system_pods.go:89] "metrics-server-569cc877fc-g6gw2" [7a4b0494-73fb-4444-a8c1-544885a2d873] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:10:57.087730   66875 system_pods.go:89] "storage-provisioner" [160d0154-7417-454b-a253-28c67b85f951] Running
	I0429 20:10:57.087740   66875 system_pods.go:126] duration metric: took 10.694398ms to wait for k8s-apps to be running ...
	I0429 20:10:57.087749   66875 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:10:57.087794   66875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:10:57.106878   66875 system_svc.go:56] duration metric: took 19.118595ms WaitForService to wait for kubelet
	I0429 20:10:57.106917   66875 kubeadm.go:576] duration metric: took 4m22.695498557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:10:57.106945   66875 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:10:57.111052   66875 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:10:57.111082   66875 node_conditions.go:123] node cpu capacity is 2
	I0429 20:10:57.111096   66875 node_conditions.go:105] duration metric: took 4.144283ms to run NodePressure ...
	I0429 20:10:57.111112   66875 start.go:240] waiting for startup goroutines ...
	I0429 20:10:57.111122   66875 start.go:245] waiting for cluster config update ...
	I0429 20:10:57.111141   66875 start.go:254] writing updated cluster config ...
	I0429 20:10:57.111536   66875 ssh_runner.go:195] Run: rm -f paused
	I0429 20:10:57.169536   66875 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:10:57.172347   66875 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-866143" cluster and "default" namespace by default
	I0429 20:10:55.358683   66218 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:10:55.371397   66218 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:10:55.397119   66218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:10:55.397192   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:55.397192   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-456788 minikube.k8s.io/updated_at=2024_04_29T20_10_55_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=no-preload-456788 minikube.k8s.io/primary=true
	I0429 20:10:55.605222   66218 ops.go:34] apiserver oom_adj: -16
	I0429 20:10:55.605588   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:56.106450   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:56.605894   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:57.105657   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:57.605823   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:54.258101   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:56.258336   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:10:58.106263   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:58.605675   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:59.106483   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:59.605671   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:00.105670   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:00.605695   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:01.106482   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:01.606206   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:02.106534   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:02.606372   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:10:58.756416   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:00.756875   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:02.756955   65980 pod_ready.go:102] pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace has status "Ready":"False"
	I0429 20:11:03.106555   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:03.606298   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.106227   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.606531   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:05.105708   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:05.605735   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:06.106556   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:06.606380   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:07.105690   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:07.605718   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:04.749964   65980 pod_ready.go:81] duration metric: took 4m0.000195525s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" ...
	E0429 20:11:04.749999   65980 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-c4h7f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0429 20:11:04.750024   65980 pod_ready.go:38] duration metric: took 4m6.211964949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:04.750053   65980 kubeadm.go:591] duration metric: took 4m17.268163648s to restartPrimaryControlPlane
	W0429 20:11:04.750123   65980 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0429 20:11:04.750156   65980 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:11:08.106383   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:08.606498   66218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:08.726533   66218 kubeadm.go:1107] duration metric: took 13.329402445s to wait for elevateKubeSystemPrivileges
	W0429 20:11:08.726584   66218 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:11:08.726596   66218 kubeadm.go:393] duration metric: took 5m14.838913251s to StartCluster
	I0429 20:11:08.726617   66218 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:08.726706   66218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:11:08.729364   66218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:08.730202   66218 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.235 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:11:08.731600   66218 out.go:177] * Verifying Kubernetes components...
	I0429 20:11:08.730245   66218 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:11:08.730446   66218 config.go:182] Loaded profile config "no-preload-456788": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:11:08.733479   66218 addons.go:69] Setting storage-provisioner=true in profile "no-preload-456788"
	I0429 20:11:08.733509   66218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:11:08.733518   66218 addons.go:69] Setting default-storageclass=true in profile "no-preload-456788"
	I0429 20:11:08.733540   66218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-456788"
	I0429 20:11:08.733514   66218 addons.go:234] Setting addon storage-provisioner=true in "no-preload-456788"
	W0429 20:11:08.733641   66218 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:11:08.733674   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.733963   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.733988   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.734081   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.734079   66218 addons.go:69] Setting metrics-server=true in profile "no-preload-456788"
	I0429 20:11:08.734106   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.734117   66218 addons.go:234] Setting addon metrics-server=true in "no-preload-456788"
	W0429 20:11:08.734126   66218 addons.go:243] addon metrics-server should already be in state true
	I0429 20:11:08.734154   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.734503   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.734536   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.754451   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I0429 20:11:08.754650   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0429 20:11:08.754827   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46779
	I0429 20:11:08.755114   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755237   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755332   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.755884   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.755905   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756031   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.756048   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.756050   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756062   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.756456   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756477   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756513   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.756853   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.757231   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.757254   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.757256   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.757291   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.761534   66218 addons.go:234] Setting addon default-storageclass=true in "no-preload-456788"
	W0429 20:11:08.761551   66218 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:11:08.761574   66218 host.go:66] Checking if "no-preload-456788" exists ...
	I0429 20:11:08.761857   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.761894   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.776659   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0429 20:11:08.776838   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0429 20:11:08.777067   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.777462   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.777643   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.777657   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.778152   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.778162   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.778170   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.778371   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.778845   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.778901   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0429 20:11:08.779220   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.779415   66218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:08.779446   66218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:08.779621   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.779634   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.780051   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.780246   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.780506   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.782432   66218 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:11:08.783809   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:11:08.783825   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:11:08.783843   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.782370   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.786004   66218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:11:08.787488   66218 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:08.787506   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:11:08.787663   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.788245   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.788290   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.788308   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.788381   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.788632   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.788834   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.788985   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:08.791587   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.791964   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.792052   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.792293   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.792477   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.792614   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.792712   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:08.798944   66218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I0429 20:11:08.799562   66218 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:08.800224   66218 main.go:141] libmachine: Using API Version  1
	I0429 20:11:08.800243   66218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:08.800790   66218 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:08.801008   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetState
	I0429 20:11:08.803220   66218 main.go:141] libmachine: (no-preload-456788) Calling .DriverName
	I0429 20:11:08.803519   66218 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:08.803534   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:11:08.803552   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHHostname
	I0429 20:11:08.806797   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.807216   66218 main.go:141] libmachine: (no-preload-456788) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:ae:18", ip: ""} in network mk-no-preload-456788: {Iface:virbr1 ExpiryTime:2024-04-29 20:56:43 +0000 UTC Type:0 Mac:52:54:00:15:ae:18 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:no-preload-456788 Clientid:01:52:54:00:15:ae:18}
	I0429 20:11:08.807244   66218 main.go:141] libmachine: (no-preload-456788) DBG | domain no-preload-456788 has defined IP address 192.168.39.235 and MAC address 52:54:00:15:ae:18 in network mk-no-preload-456788
	I0429 20:11:08.807540   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHPort
	I0429 20:11:08.807986   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHKeyPath
	I0429 20:11:08.808170   66218 main.go:141] libmachine: (no-preload-456788) Calling .GetSSHUsername
	I0429 20:11:08.808313   66218 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/no-preload-456788/id_rsa Username:docker}
	I0429 20:11:09.006753   66218 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:11:09.038156   66218 node_ready.go:35] waiting up to 6m0s for node "no-preload-456788" to be "Ready" ...
	I0429 20:11:09.051516   66218 node_ready.go:49] node "no-preload-456788" has status "Ready":"True"
	I0429 20:11:09.051545   66218 node_ready.go:38] duration metric: took 13.34705ms for node "no-preload-456788" to be "Ready" ...
	I0429 20:11:09.051557   66218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:09.064032   66218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:09.308339   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:09.308749   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:11:09.308773   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:11:09.309961   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:09.347829   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:11:09.347860   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:11:09.466683   66218 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:11:09.466718   66218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:11:09.678800   66218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:11:09.718867   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.718899   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.719248   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.719276   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.719273   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:09.719288   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.719296   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.719553   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.719574   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.719581   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:09.726177   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:09.726204   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:09.726527   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:09.726544   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:09.726590   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:10.570942   66218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.260944092s)
	I0429 20:11:10.571001   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.571012   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.571480   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.571504   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.571520   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.571528   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.571792   66218 main.go:141] libmachine: (no-preload-456788) DBG | Closing plugin on server side
	I0429 20:11:10.571818   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.571833   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.912211   66218 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.233359134s)
	I0429 20:11:10.912282   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.912298   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.912746   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.912769   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.912779   66218 main.go:141] libmachine: Making call to close driver server
	I0429 20:11:10.912787   66218 main.go:141] libmachine: (no-preload-456788) Calling .Close
	I0429 20:11:10.913055   66218 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:11:10.913108   66218 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:11:10.913132   66218 addons.go:470] Verifying addon metrics-server=true in "no-preload-456788"
	I0429 20:11:10.916694   66218 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0429 20:11:10.918273   66218 addons.go:505] duration metric: took 2.188028967s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0429 20:11:11.108067   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.108091   66218 pod_ready.go:81] duration metric: took 2.044032617s for pod "coredns-7db6d8ff4d-hcfbq" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.108103   66218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.115163   66218 pod_ready.go:92] pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.115196   66218 pod_ready.go:81] duration metric: took 7.084503ms for pod "coredns-7db6d8ff4d-pvhwv" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.115210   66218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.129264   66218 pod_ready.go:92] pod "etcd-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.129286   66218 pod_ready.go:81] duration metric: took 14.068541ms for pod "etcd-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.129297   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.148114   66218 pod_ready.go:92] pod "kube-apiserver-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.148142   66218 pod_ready.go:81] duration metric: took 18.837962ms for pod "kube-apiserver-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.148155   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.157985   66218 pod_ready.go:92] pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.158006   66218 pod_ready.go:81] duration metric: took 9.844321ms for pod "kube-controller-manager-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.158016   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6m95d" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.469680   66218 pod_ready.go:92] pod "kube-proxy-6m95d" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.469701   66218 pod_ready.go:81] duration metric: took 311.678646ms for pod "kube-proxy-6m95d" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.469710   66218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.868513   66218 pod_ready.go:92] pod "kube-scheduler-no-preload-456788" in "kube-system" namespace has status "Ready":"True"
	I0429 20:11:11.868539   66218 pod_ready.go:81] duration metric: took 398.821528ms for pod "kube-scheduler-no-preload-456788" in "kube-system" namespace to be "Ready" ...
	I0429 20:11:11.868550   66218 pod_ready.go:38] duration metric: took 2.816983409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:11:11.868569   66218 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:11:11.868632   66218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:11:11.885115   66218 api_server.go:72] duration metric: took 3.154873937s to wait for apiserver process to appear ...
	I0429 20:11:11.885146   66218 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:11:11.885169   66218 api_server.go:253] Checking apiserver healthz at https://192.168.39.235:8443/healthz ...
	I0429 20:11:11.890715   66218 api_server.go:279] https://192.168.39.235:8443/healthz returned 200:
	ok
	I0429 20:11:11.891649   66218 api_server.go:141] control plane version: v1.30.0
	I0429 20:11:11.891671   66218 api_server.go:131] duration metric: took 6.518818ms to wait for apiserver health ...
	I0429 20:11:11.891679   66218 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:11:12.072142   66218 system_pods.go:59] 9 kube-system pods found
	I0429 20:11:12.072175   66218 system_pods.go:61] "coredns-7db6d8ff4d-hcfbq" [c0b53824-478e-4523-ada4-1cd7ba306c81] Running
	I0429 20:11:12.072183   66218 system_pods.go:61] "coredns-7db6d8ff4d-pvhwv" [f38ee7b3-53fe-4609-9b2b-000f55de5d5c] Running
	I0429 20:11:12.072188   66218 system_pods.go:61] "etcd-no-preload-456788" [b0629d4c-643a-485d-aa85-33fe009fff50] Running
	I0429 20:11:12.072194   66218 system_pods.go:61] "kube-apiserver-no-preload-456788" [e56edf5c-9883-4cd9-abab-09902048f584] Running
	I0429 20:11:12.072200   66218 system_pods.go:61] "kube-controller-manager-no-preload-456788" [bfaf44f0-da19-4cec-bec9-d9917cb8a571] Running
	I0429 20:11:12.072205   66218 system_pods.go:61] "kube-proxy-6m95d" [25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7] Running
	I0429 20:11:12.072209   66218 system_pods.go:61] "kube-scheduler-no-preload-456788" [de4f90f7-05d6-4755-a4c0-2c522f7fe88c] Running
	I0429 20:11:12.072217   66218 system_pods.go:61] "metrics-server-569cc877fc-sxgwr" [046d28fe-d51e-43ba-9550-d1d7e33d9d84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:11:12.072224   66218 system_pods.go:61] "storage-provisioner" [fd1c4813-8889-4f21-b21e-6007eaa163a6] Running
	I0429 20:11:12.072247   66218 system_pods.go:74] duration metric: took 180.561509ms to wait for pod list to return data ...
	I0429 20:11:12.072256   66218 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:11:12.267637   66218 default_sa.go:45] found service account: "default"
	I0429 20:11:12.267663   66218 default_sa.go:55] duration metric: took 195.398841ms for default service account to be created ...
	I0429 20:11:12.267677   66218 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:11:12.471933   66218 system_pods.go:86] 9 kube-system pods found
	I0429 20:11:12.471967   66218 system_pods.go:89] "coredns-7db6d8ff4d-hcfbq" [c0b53824-478e-4523-ada4-1cd7ba306c81] Running
	I0429 20:11:12.471975   66218 system_pods.go:89] "coredns-7db6d8ff4d-pvhwv" [f38ee7b3-53fe-4609-9b2b-000f55de5d5c] Running
	I0429 20:11:12.471981   66218 system_pods.go:89] "etcd-no-preload-456788" [b0629d4c-643a-485d-aa85-33fe009fff50] Running
	I0429 20:11:12.471987   66218 system_pods.go:89] "kube-apiserver-no-preload-456788" [e56edf5c-9883-4cd9-abab-09902048f584] Running
	I0429 20:11:12.471994   66218 system_pods.go:89] "kube-controller-manager-no-preload-456788" [bfaf44f0-da19-4cec-bec9-d9917cb8a571] Running
	I0429 20:11:12.471999   66218 system_pods.go:89] "kube-proxy-6m95d" [25d3c0a6-7850-43de-a0e1-0d2ca3c3e1c7] Running
	I0429 20:11:12.472008   66218 system_pods.go:89] "kube-scheduler-no-preload-456788" [de4f90f7-05d6-4755-a4c0-2c522f7fe88c] Running
	I0429 20:11:12.472020   66218 system_pods.go:89] "metrics-server-569cc877fc-sxgwr" [046d28fe-d51e-43ba-9550-d1d7e33d9d84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:11:12.472027   66218 system_pods.go:89] "storage-provisioner" [fd1c4813-8889-4f21-b21e-6007eaa163a6] Running
	I0429 20:11:12.472039   66218 system_pods.go:126] duration metric: took 204.355515ms to wait for k8s-apps to be running ...
	I0429 20:11:12.472052   66218 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:11:12.472110   66218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:11:12.487748   66218 system_svc.go:56] duration metric: took 15.68796ms WaitForService to wait for kubelet
	I0429 20:11:12.487779   66218 kubeadm.go:576] duration metric: took 3.757538662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:11:12.487804   66218 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:11:12.668597   66218 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:11:12.668619   66218 node_conditions.go:123] node cpu capacity is 2
	I0429 20:11:12.668629   66218 node_conditions.go:105] duration metric: took 180.819727ms to run NodePressure ...
	I0429 20:11:12.668640   66218 start.go:240] waiting for startup goroutines ...
	I0429 20:11:12.668646   66218 start.go:245] waiting for cluster config update ...
	I0429 20:11:12.668656   66218 start.go:254] writing updated cluster config ...
	I0429 20:11:12.668905   66218 ssh_runner.go:195] Run: rm -f paused
	I0429 20:11:12.718997   66218 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:11:12.720757   66218 out.go:177] * Done! kubectl is now configured to use "no-preload-456788" cluster and "default" namespace by default
	I0429 20:11:37.819019   65980 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.068841912s)
	I0429 20:11:37.819092   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:11:37.836850   65980 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 20:11:37.849684   65980 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:11:37.861597   65980 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:11:37.861626   65980 kubeadm.go:156] found existing configuration files:
	
	I0429 20:11:37.861674   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:11:37.872799   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:11:37.872860   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:11:37.884336   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:11:37.895124   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:11:37.895181   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:11:37.906874   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:11:37.917482   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:11:37.917530   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:11:37.928137   65980 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:11:37.938698   65980 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:11:37.938750   65980 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 20:11:37.949658   65980 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:11:38.159358   65980 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:11:46.848042   65980 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 20:11:46.848108   65980 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:11:46.848169   65980 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:11:46.848308   65980 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:11:46.848447   65980 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:11:46.848531   65980 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:11:46.850368   65980 out.go:204]   - Generating certificates and keys ...
	I0429 20:11:46.850444   65980 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:11:46.850496   65980 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:11:46.850580   65980 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:11:46.850649   65980 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:11:46.850742   65980 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:11:46.850850   65980 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:11:46.850949   65980 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:11:46.851018   65980 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:11:46.851117   65980 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:11:46.851201   65980 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:11:46.851263   65980 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:11:46.851327   65980 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:11:46.851395   65980 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:11:46.851466   65980 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 20:11:46.851513   65980 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:11:46.851605   65980 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:11:46.851690   65980 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:11:46.851791   65980 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:11:46.851878   65980 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:11:46.853420   65980 out.go:204]   - Booting up control plane ...
	I0429 20:11:46.853526   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:11:46.853617   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:11:46.853696   65980 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:11:46.853791   65980 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:11:46.853866   65980 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:11:46.853900   65980 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:11:46.854010   65980 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 20:11:46.854094   65980 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 20:11:46.854148   65980 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.976221ms
	I0429 20:11:46.854240   65980 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 20:11:46.854311   65980 kubeadm.go:309] [api-check] The API server is healthy after 5.50298765s
	I0429 20:11:46.854407   65980 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 20:11:46.854509   65980 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 20:11:46.854565   65980 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 20:11:46.854726   65980 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-161370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 20:11:46.854783   65980 kubeadm.go:309] [bootstrap-token] Using token: 93xwhj.zowa67wvl54p1iru
	I0429 20:11:46.856308   65980 out.go:204]   - Configuring RBAC rules ...
	I0429 20:11:46.856452   65980 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 20:11:46.856561   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 20:11:46.856736   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 20:11:46.856867   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 20:11:46.857018   65980 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 20:11:46.857140   65980 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 20:11:46.857294   65980 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 20:11:46.857358   65980 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 20:11:46.857419   65980 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 20:11:46.857428   65980 kubeadm.go:309] 
	I0429 20:11:46.857502   65980 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 20:11:46.857514   65980 kubeadm.go:309] 
	I0429 20:11:46.857606   65980 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 20:11:46.857617   65980 kubeadm.go:309] 
	I0429 20:11:46.857649   65980 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 20:11:46.857725   65980 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 20:11:46.857797   65980 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 20:11:46.857806   65980 kubeadm.go:309] 
	I0429 20:11:46.857880   65980 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 20:11:46.857889   65980 kubeadm.go:309] 
	I0429 20:11:46.857947   65980 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 20:11:46.857955   65980 kubeadm.go:309] 
	I0429 20:11:46.858020   65980 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 20:11:46.858125   65980 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 20:11:46.858216   65980 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 20:11:46.858224   65980 kubeadm.go:309] 
	I0429 20:11:46.858325   65980 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 20:11:46.858433   65980 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 20:11:46.858442   65980 kubeadm.go:309] 
	I0429 20:11:46.858553   65980 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 93xwhj.zowa67wvl54p1iru \
	I0429 20:11:46.858696   65980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 \
	I0429 20:11:46.858722   65980 kubeadm.go:309] 	--control-plane 
	I0429 20:11:46.858728   65980 kubeadm.go:309] 
	I0429 20:11:46.858797   65980 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 20:11:46.858803   65980 kubeadm.go:309] 
	I0429 20:11:46.858881   65980 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 93xwhj.zowa67wvl54p1iru \
	I0429 20:11:46.859014   65980 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:02121775bba471643be59ce614eadfe1c831d473f031ea5adf9984f2794f57f3 
	I0429 20:11:46.859025   65980 cni.go:84] Creating CNI manager for ""
	I0429 20:11:46.859034   65980 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 20:11:46.861619   65980 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0429 20:11:46.863111   65980 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0429 20:11:46.875965   65980 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0429 20:11:46.897147   65980 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 20:11:46.897225   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:46.897238   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-161370 minikube.k8s.io/updated_at=2024_04_29T20_11_46_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=embed-certs-161370 minikube.k8s.io/primary=true
	I0429 20:11:46.927555   65980 ops.go:34] apiserver oom_adj: -16
	I0429 20:11:47.119594   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:47.620640   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:48.119974   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:48.620618   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:49.120107   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:49.620349   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:50.120180   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:50.620533   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:51.120332   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:51.620669   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:52.119922   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:52.620467   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:53.120486   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:53.620314   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:54.120159   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:54.620430   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:55.119995   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:55.620496   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:56.120152   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:56.620390   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:57.120090   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:57.619671   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:58.120549   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:58.620334   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.120532   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.619732   65980 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 20:11:59.765502   65980 kubeadm.go:1107] duration metric: took 12.868344365s to wait for elevateKubeSystemPrivileges
	W0429 20:11:59.765550   65980 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 20:11:59.765561   65980 kubeadm.go:393] duration metric: took 5m12.339650014s to StartCluster
	I0429 20:11:59.765582   65980 settings.go:142] acquiring lock: {Name:mkacaaa4820eb15b252abf8e52ffd37a8556d4be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:59.765671   65980 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 20:11:59.767924   65980 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/kubeconfig: {Name:mk5537af909977fd28b8d1e9176a714e6322e1c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 20:11:59.768253   65980 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0429 20:11:59.769950   65980 out.go:177] * Verifying Kubernetes components...
	I0429 20:11:59.768323   65980 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 20:11:59.768433   65980 config.go:182] Loaded profile config "embed-certs-161370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 20:11:59.771281   65980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 20:11:59.771300   65980 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-161370"
	I0429 20:11:59.771313   65980 addons.go:69] Setting default-storageclass=true in profile "embed-certs-161370"
	I0429 20:11:59.771332   65980 addons.go:69] Setting metrics-server=true in profile "embed-certs-161370"
	I0429 20:11:59.771344   65980 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-161370"
	W0429 20:11:59.771355   65980 addons.go:243] addon storage-provisioner should already be in state true
	I0429 20:11:59.771361   65980 addons.go:234] Setting addon metrics-server=true in "embed-certs-161370"
	W0429 20:11:59.771370   65980 addons.go:243] addon metrics-server should already be in state true
	I0429 20:11:59.771399   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.771401   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.771354   65980 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-161370"
	I0429 20:11:59.771757   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771768   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771772   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.771783   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.771786   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.771788   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.787359   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0429 20:11:59.787384   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0429 20:11:59.787503   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46153
	I0429 20:11:59.787764   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.787987   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.788069   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.788254   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788273   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.788708   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788724   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.788773   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.788832   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.788852   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.789102   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.789117   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.789267   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.789478   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.789510   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.790170   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.790220   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.792108   65980 addons.go:234] Setting addon default-storageclass=true in "embed-certs-161370"
	W0429 20:11:59.792127   65980 addons.go:243] addon default-storageclass should already be in state true
	I0429 20:11:59.792154   65980 host.go:66] Checking if "embed-certs-161370" exists ...
	I0429 20:11:59.792386   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.792424   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.808581   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35659
	I0429 20:11:59.808924   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44943
	I0429 20:11:59.808943   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.809461   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.809481   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.809561   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.809791   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.810335   65980 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/18774-7754/.minikube/bin/docker-machine-driver-kvm2
	I0429 20:11:59.810357   65980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 20:11:59.810976   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.810992   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.811324   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.811604   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0429 20:11:59.811758   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.812141   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.812592   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.812610   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.813130   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.813351   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.813614   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.815589   65980 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0429 20:11:59.817004   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0429 20:11:59.817014   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0429 20:11:59.817027   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.815020   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.818585   65980 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 20:11:59.820110   65980 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:11:59.820125   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 20:11:59.820140   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.819840   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.820305   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.820333   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.820563   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.820722   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.820874   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.820998   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.822849   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.823299   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.823323   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.823460   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.823599   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.823924   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.824039   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.827552   65980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0429 20:11:59.827976   65980 main.go:141] libmachine: () Calling .GetVersion
	I0429 20:11:59.828369   65980 main.go:141] libmachine: Using API Version  1
	I0429 20:11:59.828389   65980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 20:11:59.828754   65980 main.go:141] libmachine: () Calling .GetMachineName
	I0429 20:11:59.828921   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetState
	I0429 20:11:59.830295   65980 main.go:141] libmachine: (embed-certs-161370) Calling .DriverName
	I0429 20:11:59.830566   65980 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 20:11:59.830578   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 20:11:59.830590   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHHostname
	I0429 20:11:59.833174   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.833526   65980 main.go:141] libmachine: (embed-certs-161370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:05:1f", ip: ""} in network mk-embed-certs-161370: {Iface:virbr2 ExpiryTime:2024-04-29 21:06:30 +0000 UTC Type:0 Mac:52:54:00:e6:05:1f Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:embed-certs-161370 Clientid:01:52:54:00:e6:05:1f}
	I0429 20:11:59.833545   65980 main.go:141] libmachine: (embed-certs-161370) DBG | domain embed-certs-161370 has defined IP address 192.168.50.184 and MAC address 52:54:00:e6:05:1f in network mk-embed-certs-161370
	I0429 20:11:59.833759   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHPort
	I0429 20:11:59.833910   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHKeyPath
	I0429 20:11:59.834029   65980 main.go:141] libmachine: (embed-certs-161370) Calling .GetSSHUsername
	I0429 20:11:59.834166   65980 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/embed-certs-161370/id_rsa Username:docker}
	I0429 20:11:59.978978   65980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 20:11:59.995547   65980 node_ready.go:35] waiting up to 6m0s for node "embed-certs-161370" to be "Ready" ...
	I0429 20:12:00.003802   65980 node_ready.go:49] node "embed-certs-161370" has status "Ready":"True"
	I0429 20:12:00.003823   65980 node_ready.go:38] duration metric: took 8.245639ms for node "embed-certs-161370" to be "Ready" ...
	I0429 20:12:00.003833   65980 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0429 20:12:00.010487   65980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:00.072627   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 20:12:00.075716   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 20:12:00.177043   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0429 20:12:00.177069   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0429 20:12:00.278082   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0429 20:12:00.278112   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0429 20:12:00.311731   65980 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0429 20:12:00.311756   65980 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0429 20:12:00.369982   65980 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
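The lines above show the addon manifests being copied into the node and applied with the bundled kubectl over SSH. The Go sketch below is a hypothetical, simplified local equivalent of that single apply step, built only from the command and paths visible in the log; it is not the minikube implementation itself.

// Hypothetical sketch of the apply step logged above. minikube runs this over
// SSH inside the VM (ssh_runner.go); here the same command is run locally via
// os/exec for illustration only. Paths are taken from the log lines above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.0/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}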
	I0429 20:12:00.642840   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.642865   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643084   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.643109   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643227   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.643240   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.643248   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.643256   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.643374   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.645085   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645103   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.645112   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.645121   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.645196   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645228   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.645231   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.645331   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.645343   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:00.658929   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:00.658955   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:00.659236   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:00.659267   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:00.659281   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.103183   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:01.103207   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:01.103488   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:01.103542   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.103557   65980 main.go:141] libmachine: Making call to close driver server
	I0429 20:12:01.103541   65980 main.go:141] libmachine: (embed-certs-161370) DBG | Closing plugin on server side
	I0429 20:12:01.103584   65980 main.go:141] libmachine: (embed-certs-161370) Calling .Close
	I0429 20:12:01.105440   65980 main.go:141] libmachine: Successfully made call to close driver server
	I0429 20:12:01.105461   65980 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 20:12:01.105473   65980 addons.go:470] Verifying addon metrics-server=true in "embed-certs-161370"
	I0429 20:12:01.107435   65980 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0429 20:12:01.109051   65980 addons.go:505] duration metric: took 1.340729876s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0429 20:12:02.029772   65980 pod_ready.go:102] pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace has status "Ready":"False"
	I0429 20:12:02.520396   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.520417   65980 pod_ready.go:81] duration metric: took 2.509903724s for pod "coredns-7db6d8ff4d-7z6zv" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.520426   65980 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.529115   65980 pod_ready.go:92] pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.529141   65980 pod_ready.go:81] duration metric: took 8.707165ms for pod "coredns-7db6d8ff4d-rr6bd" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.529153   65980 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.539459   65980 pod_ready.go:92] pod "etcd-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.539478   65980 pod_ready.go:81] duration metric: took 10.318294ms for pod "etcd-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.539489   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.543813   65980 pod_ready.go:92] pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.543830   65980 pod_ready.go:81] duration metric: took 4.333619ms for pod "kube-apiserver-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.543839   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.549343   65980 pod_ready.go:92] pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.549363   65980 pod_ready.go:81] duration metric: took 5.516323ms for pod "kube-controller-manager-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.549374   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wq48j" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.915209   65980 pod_ready.go:92] pod "kube-proxy-wq48j" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:02.915232   65980 pod_ready.go:81] duration metric: took 365.851814ms for pod "kube-proxy-wq48j" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:02.915240   65980 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:03.315564   65980 pod_ready.go:92] pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace has status "Ready":"True"
	I0429 20:12:03.315587   65980 pod_ready.go:81] duration metric: took 400.340876ms for pod "kube-scheduler-embed-certs-161370" in "kube-system" namespace to be "Ready" ...
	I0429 20:12:03.315595   65980 pod_ready.go:38] duration metric: took 3.311752591s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
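The pod_ready.go entries above wait for each system-critical pod to report a True PodReady condition. The following is a minimal, hypothetical sketch of that kind of readiness poll using client-go; the kubeconfig path, namespace, and pod name are placeholders taken from the log, and the real helper tracks several label selectors at once.

// Poll a single pod until its PodReady condition is True (illustrative only).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-7z6zv", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("pod never became Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}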
	I0429 20:12:03.315609   65980 api_server.go:52] waiting for apiserver process to appear ...
	I0429 20:12:03.315655   65980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 20:12:03.333491   65980 api_server.go:72] duration metric: took 3.565200855s to wait for apiserver process to appear ...
	I0429 20:12:03.333521   65980 api_server.go:88] waiting for apiserver healthz status ...
	I0429 20:12:03.333538   65980 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8443/healthz ...
	I0429 20:12:03.338822   65980 api_server.go:279] https://192.168.50.184:8443/healthz returned 200:
	ok
	I0429 20:12:03.339975   65980 api_server.go:141] control plane version: v1.30.0
	I0429 20:12:03.339995   65980 api_server.go:131] duration metric: took 6.468233ms to wait for apiserver health ...
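The api_server.go lines above probe the apiserver's /healthz endpoint until it returns 200 with body "ok". A rough, hypothetical approximation of that check is sketched below; the address comes from the log, and the timeout and TLS handling are assumptions rather than the minikube source.

// Probe the apiserver healthz endpoint once and report the result.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip cert verification for the sketch; minikube's real client trusts
		// the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.184:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}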
	I0429 20:12:03.340002   65980 system_pods.go:43] waiting for kube-system pods to appear ...
	I0429 20:12:03.519016   65980 system_pods.go:59] 9 kube-system pods found
	I0429 20:12:03.519042   65980 system_pods.go:61] "coredns-7db6d8ff4d-7z6zv" [422451a2-615d-4bf8-8de8-d5fa5805219f] Running
	I0429 20:12:03.519047   65980 system_pods.go:61] "coredns-7db6d8ff4d-rr6bd" [6d14ff20-6dab-4c02-b91c-0a1e326f1593] Running
	I0429 20:12:03.519050   65980 system_pods.go:61] "etcd-embed-certs-161370" [ab19e79c-18bd-4d0d-b5cf-639453495383] Running
	I0429 20:12:03.519055   65980 system_pods.go:61] "kube-apiserver-embed-certs-161370" [6091dd0a-333d-4729-97db-eb7a30755db4] Running
	I0429 20:12:03.519059   65980 system_pods.go:61] "kube-controller-manager-embed-certs-161370" [de70d57c-9329-4d37-a838-9c9ae1e41871] Running
	I0429 20:12:03.519061   65980 system_pods.go:61] "kube-proxy-wq48j" [3b3b23ef-b5b4-4754-bc44-73e1d51a18d7] Running
	I0429 20:12:03.519065   65980 system_pods.go:61] "kube-scheduler-embed-certs-161370" [c7fd3d36-4e35-43b2-93e7-45129464937d] Running
	I0429 20:12:03.519071   65980 system_pods.go:61] "metrics-server-569cc877fc-x2wb6" [cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:12:03.519075   65980 system_pods.go:61] "storage-provisioner" [93e046a1-3867-44e1-8a4f-cf0eba6dfd6b] Running
	I0429 20:12:03.519082   65980 system_pods.go:74] duration metric: took 179.075384ms to wait for pod list to return data ...
	I0429 20:12:03.519089   65980 default_sa.go:34] waiting for default service account to be created ...
	I0429 20:12:03.714354   65980 default_sa.go:45] found service account: "default"
	I0429 20:12:03.714384   65980 default_sa.go:55] duration metric: took 195.287433ms for default service account to be created ...
	I0429 20:12:03.714395   65980 system_pods.go:116] waiting for k8s-apps to be running ...
	I0429 20:12:03.918729   65980 system_pods.go:86] 9 kube-system pods found
	I0429 20:12:03.918755   65980 system_pods.go:89] "coredns-7db6d8ff4d-7z6zv" [422451a2-615d-4bf8-8de8-d5fa5805219f] Running
	I0429 20:12:03.918760   65980 system_pods.go:89] "coredns-7db6d8ff4d-rr6bd" [6d14ff20-6dab-4c02-b91c-0a1e326f1593] Running
	I0429 20:12:03.918765   65980 system_pods.go:89] "etcd-embed-certs-161370" [ab19e79c-18bd-4d0d-b5cf-639453495383] Running
	I0429 20:12:03.918769   65980 system_pods.go:89] "kube-apiserver-embed-certs-161370" [6091dd0a-333d-4729-97db-eb7a30755db4] Running
	I0429 20:12:03.918773   65980 system_pods.go:89] "kube-controller-manager-embed-certs-161370" [de70d57c-9329-4d37-a838-9c9ae1e41871] Running
	I0429 20:12:03.918777   65980 system_pods.go:89] "kube-proxy-wq48j" [3b3b23ef-b5b4-4754-bc44-73e1d51a18d7] Running
	I0429 20:12:03.918780   65980 system_pods.go:89] "kube-scheduler-embed-certs-161370" [c7fd3d36-4e35-43b2-93e7-45129464937d] Running
	I0429 20:12:03.918787   65980 system_pods.go:89] "metrics-server-569cc877fc-x2wb6" [cb0f2f90-66e9-4a7f-ae70-82f2e72aa3b3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0429 20:12:03.918791   65980 system_pods.go:89] "storage-provisioner" [93e046a1-3867-44e1-8a4f-cf0eba6dfd6b] Running
	I0429 20:12:03.918800   65980 system_pods.go:126] duration metric: took 204.399385ms to wait for k8s-apps to be running ...
	I0429 20:12:03.918809   65980 system_svc.go:44] waiting for kubelet service to be running ....
	I0429 20:12:03.918851   65980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:12:03.937870   65980 system_svc.go:56] duration metric: took 19.05503ms WaitForService to wait for kubelet
	I0429 20:12:03.937892   65980 kubeadm.go:576] duration metric: took 4.169607456s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 20:12:03.937910   65980 node_conditions.go:102] verifying NodePressure condition ...
	I0429 20:12:04.116479   65980 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0429 20:12:04.116504   65980 node_conditions.go:123] node cpu capacity is 2
	I0429 20:12:04.116513   65980 node_conditions.go:105] duration metric: took 178.599246ms to run NodePressure ...
	I0429 20:12:04.116524   65980 start.go:240] waiting for startup goroutines ...
	I0429 20:12:04.116530   65980 start.go:245] waiting for cluster config update ...
	I0429 20:12:04.116540   65980 start.go:254] writing updated cluster config ...
	I0429 20:12:04.116799   65980 ssh_runner.go:195] Run: rm -f paused
	I0429 20:12:04.167803   65980 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0429 20:12:04.169861   65980 out.go:177] * Done! kubectl is now configured to use "embed-certs-161370" cluster and "default" namespace by default
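The start.go line above reports "kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)". As a hypothetical illustration only, the skew figure can be thought of as the absolute difference between the minor versions of the local kubectl and the cluster's control plane; the version strings below are hard-coded, whereas minikube discovers both at runtime.

// Compute the minor-version skew between two Kubernetes version strings.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	n, _ := strconv.Atoi(parts[1])
	return n
}

func main() {
	kubectl, cluster := "1.30.0", "1.30.0"
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}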
	I0429 20:12:09.853929   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:12:09.854036   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:12:09.856141   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:12:09.856215   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:12:09.856314   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:12:09.856435   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:12:09.856529   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:12:09.856638   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:12:09.858658   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:12:09.858759   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:12:09.858821   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:12:09.858914   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:12:09.858967   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:12:09.859049   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:12:09.859118   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:12:09.859197   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:12:09.859311   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:12:09.859435   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:12:09.859548   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:12:09.859605   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:12:09.859678   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:12:09.859766   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:12:09.859856   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:12:09.859947   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:12:09.860025   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:12:09.860149   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:12:09.860228   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:12:09.860289   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:12:09.860390   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:12:09.862098   66615 out.go:204]   - Booting up control plane ...
	I0429 20:12:09.862211   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:12:09.862298   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:12:09.862360   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:12:09.862484   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:12:09.862720   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:12:09.862794   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:12:09.862882   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863117   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863244   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863470   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863544   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.863814   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.863895   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864144   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864223   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:12:09.864393   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:12:09.864408   66615 kubeadm.go:309] 
	I0429 20:12:09.864473   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:12:09.864526   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:12:09.864543   66615 kubeadm.go:309] 
	I0429 20:12:09.864589   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:12:09.864638   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:12:09.864779   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:12:09.864789   66615 kubeadm.go:309] 
	I0429 20:12:09.864911   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:12:09.864971   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:12:09.865026   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:12:09.865033   66615 kubeadm.go:309] 
	I0429 20:12:09.865150   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:12:09.865228   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:12:09.865241   66615 kubeadm.go:309] 
	I0429 20:12:09.865404   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:12:09.865538   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:12:09.865651   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:12:09.865755   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:12:09.865828   66615 kubeadm.go:309] 
	W0429 20:12:09.865940   66615 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0429 20:12:09.866027   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0429 20:12:10.987703   66615 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.121642991s)
	I0429 20:12:10.987802   66615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 20:12:11.007295   66615 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 20:12:11.020772   66615 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 20:12:11.020790   66615 kubeadm.go:156] found existing configuration files:
	
	I0429 20:12:11.020838   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 20:12:11.033334   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 20:12:11.033405   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 20:12:11.044565   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 20:12:11.057087   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 20:12:11.057143   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 20:12:11.069908   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.082866   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 20:12:11.082920   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 20:12:11.096659   66615 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 20:12:11.110106   66615 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 20:12:11.110166   66615 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
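The preceding kubeadm.go:162 lines grep each leftover kubeconfig for the expected control-plane endpoint and delete the file when the endpoint is absent (or the file is missing). The sketch below is a simplified, hypothetical rendering of that cleanup loop using the Go standard library; minikube actually performs it over SSH with grep and rm, as shown in the log.

// Remove kubeadm-generated kubeconfigs that do not point at the expected endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: treat as stale and remove it.
			os.Remove(f)
			fmt.Println("removed stale config:", f)
			continue
		}
		fmt.Println("kept:", f)
	}
}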
	I0429 20:12:11.124952   66615 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 20:12:11.396252   66615 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 20:14:07.831448   66615 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0429 20:14:07.831556   66615 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0429 20:14:07.833111   66615 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0429 20:14:07.833179   66615 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 20:14:07.833288   66615 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 20:14:07.833421   66615 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 20:14:07.833530   66615 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 20:14:07.833616   66615 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 20:14:07.835518   66615 out.go:204]   - Generating certificates and keys ...
	I0429 20:14:07.835623   66615 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 20:14:07.835703   66615 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 20:14:07.835776   66615 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0429 20:14:07.835839   66615 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0429 20:14:07.835893   66615 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0429 20:14:07.835957   66615 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0429 20:14:07.836039   66615 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0429 20:14:07.836129   66615 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0429 20:14:07.836238   66615 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0429 20:14:07.836350   66615 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0429 20:14:07.836394   66615 kubeadm.go:309] [certs] Using the existing "sa" key
	I0429 20:14:07.836441   66615 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 20:14:07.836488   66615 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 20:14:07.836559   66615 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 20:14:07.836637   66615 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 20:14:07.836683   66615 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 20:14:07.836778   66615 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 20:14:07.836854   66615 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 20:14:07.836895   66615 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 20:14:07.836950   66615 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 20:14:07.838553   66615 out.go:204]   - Booting up control plane ...
	I0429 20:14:07.838635   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 20:14:07.838718   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 20:14:07.838836   66615 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 20:14:07.838918   66615 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 20:14:07.839069   66615 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0429 20:14:07.839126   66615 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0429 20:14:07.839180   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839369   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839450   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.839654   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.839779   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840008   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840076   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840322   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840380   66615 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0429 20:14:07.840571   66615 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0429 20:14:07.840594   66615 kubeadm.go:309] 
	I0429 20:14:07.840637   66615 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0429 20:14:07.840673   66615 kubeadm.go:309] 		timed out waiting for the condition
	I0429 20:14:07.840682   66615 kubeadm.go:309] 
	I0429 20:14:07.840715   66615 kubeadm.go:309] 	This error is likely caused by:
	I0429 20:14:07.840745   66615 kubeadm.go:309] 		- The kubelet is not running
	I0429 20:14:07.840844   66615 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0429 20:14:07.840857   66615 kubeadm.go:309] 
	I0429 20:14:07.840969   66615 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0429 20:14:07.841022   66615 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0429 20:14:07.841073   66615 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0429 20:14:07.841083   66615 kubeadm.go:309] 
	I0429 20:14:07.841184   66615 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0429 20:14:07.841315   66615 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0429 20:14:07.841325   66615 kubeadm.go:309] 
	I0429 20:14:07.841454   66615 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0429 20:14:07.841550   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0429 20:14:07.841632   66615 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0429 20:14:07.841697   66615 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0429 20:14:07.841760   66615 kubeadm.go:393] duration metric: took 8m1.501853767s to StartCluster
	I0429 20:14:07.841781   66615 kubeadm.go:309] 
	I0429 20:14:07.841800   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0429 20:14:07.841853   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0429 20:14:07.898194   66615 cri.go:89] found id: ""
	I0429 20:14:07.898227   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.898237   66615 logs.go:278] No container was found matching "kube-apiserver"
	I0429 20:14:07.898244   66615 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0429 20:14:07.898316   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0429 20:14:07.938873   66615 cri.go:89] found id: ""
	I0429 20:14:07.938903   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.938914   66615 logs.go:278] No container was found matching "etcd"
	I0429 20:14:07.938921   66615 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0429 20:14:07.938979   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0429 20:14:07.980523   66615 cri.go:89] found id: ""
	I0429 20:14:07.980551   66615 logs.go:276] 0 containers: []
	W0429 20:14:07.980559   66615 logs.go:278] No container was found matching "coredns"
	I0429 20:14:07.980565   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0429 20:14:07.980612   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0429 20:14:08.021334   66615 cri.go:89] found id: ""
	I0429 20:14:08.021366   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.021377   66615 logs.go:278] No container was found matching "kube-scheduler"
	I0429 20:14:08.021389   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0429 20:14:08.021446   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0429 20:14:08.060598   66615 cri.go:89] found id: ""
	I0429 20:14:08.060636   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.060648   66615 logs.go:278] No container was found matching "kube-proxy"
	I0429 20:14:08.060655   66615 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0429 20:14:08.060716   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0429 20:14:08.101689   66615 cri.go:89] found id: ""
	I0429 20:14:08.101715   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.101723   66615 logs.go:278] No container was found matching "kube-controller-manager"
	I0429 20:14:08.101729   66615 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0429 20:14:08.101786   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0429 20:14:08.143295   66615 cri.go:89] found id: ""
	I0429 20:14:08.143333   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.143344   66615 logs.go:278] No container was found matching "kindnet"
	I0429 20:14:08.143351   66615 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0429 20:14:08.143408   66615 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0429 20:14:08.190555   66615 cri.go:89] found id: ""
	I0429 20:14:08.190585   66615 logs.go:276] 0 containers: []
	W0429 20:14:08.190597   66615 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0429 20:14:08.190609   66615 logs.go:123] Gathering logs for container status ...
	I0429 20:14:08.190624   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0429 20:14:08.251830   66615 logs.go:123] Gathering logs for kubelet ...
	I0429 20:14:08.251870   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0429 20:14:08.306512   66615 logs.go:123] Gathering logs for dmesg ...
	I0429 20:14:08.306554   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0429 20:14:08.323258   66615 logs.go:123] Gathering logs for describe nodes ...
	I0429 20:14:08.323283   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0429 20:14:08.405539   66615 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0429 20:14:08.405568   66615 logs.go:123] Gathering logs for CRI-O ...
	I0429 20:14:08.405583   66615 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0429 20:14:08.514288   66615 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0429 20:14:08.514344   66615 out.go:239] * 
	W0429 20:14:08.514431   66615 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.514465   66615 out.go:239] * 
	W0429 20:14:08.515399   66615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 20:14:08.518578   66615 out.go:177] 
	W0429 20:14:08.519725   66615 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0429 20:14:08.519782   66615 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0429 20:14:08.519816   66615 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0429 20:14:08.521068   66615 out.go:177] 
	
	
	==> CRI-O <==
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.083260162Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422343083231291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d448a47b-8248-4551-9eb8-65d3fd9fbadf name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.083851084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f95b5e9f-0f35-491a-8bca-e178c11d72c9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.083923642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f95b5e9f-0f35-491a-8bca-e178c11d72c9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.084067209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f95b5e9f-0f35-491a-8bca-e178c11d72c9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.122811439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24e8a9eb-5dcf-4fe4-a848-2c7bd12237ee name=/runtime.v1.RuntimeService/Version
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.122926336Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24e8a9eb-5dcf-4fe4-a848-2c7bd12237ee name=/runtime.v1.RuntimeService/Version
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.124665329Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=322ca2c1-6f30-4238-8eab-713995ac1c81 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.125168164Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422343125142592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=322ca2c1-6f30-4238-8eab-713995ac1c81 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.125851388Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c930e1cc-a9ac-4b31-aecc-45887d3e7b8f name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.125915983Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c930e1cc-a9ac-4b31-aecc-45887d3e7b8f name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.126016885Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c930e1cc-a9ac-4b31-aecc-45887d3e7b8f name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.164650903Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d203f11-24f9-442d-afd6-a54efa9f2f59 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.164755834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d203f11-24f9-442d-afd6-a54efa9f2f59 name=/runtime.v1.RuntimeService/Version
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.166533931Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99a3e7b9-c1f9-4ffd-bf18-27de13287cdc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.167019812Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422343166915667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99a3e7b9-c1f9-4ffd-bf18-27de13287cdc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.167617024Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c5b8142-cc6e-47d6-bd77-7029d7cebccb name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.167685795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c5b8142-cc6e-47d6-bd77-7029d7cebccb name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.167717999Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7c5b8142-cc6e-47d6-bd77-7029d7cebccb name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.207733310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d3390ef-e6b3-4d3e-b740-3ccf54a1f5fd name=/runtime.v1.RuntimeService/Version
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.207851979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d3390ef-e6b3-4d3e-b740-3ccf54a1f5fd name=/runtime.v1.RuntimeService/Version
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.210286819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b06c1fc2-d56d-4d18-bdc8-32fae1fe049a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.210717049Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1714422343210685282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b06c1fc2-d56d-4d18-bdc8-32fae1fe049a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.211249299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5776733a-8c8f-4de4-b8be-2995333519ac name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.211343392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5776733a-8c8f-4de4-b8be-2995333519ac name=/runtime.v1.RuntimeService/ListContainers
	Apr 29 20:25:43 old-k8s-version-919612 crio[646]: time="2024-04-29 20:25:43.211390629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5776733a-8c8f-4de4-b8be-2995333519ac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr29 20:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052789] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046548] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.710890] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.577556] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.715602] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.063950] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.064197] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076631] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.231967] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.183078] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.301851] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[Apr29 20:06] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +0.070853] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.488329] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[ +10.271232] kauditd_printk_skb: 46 callbacks suppressed
	[Apr29 20:10] systemd-fstab-generator[4978]: Ignoring "noauto" option for root device
	[Apr29 20:12] systemd-fstab-generator[5259]: Ignoring "noauto" option for root device
	[  +0.075523] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:25:43 up 20 min,  0 users,  load average: 0.02, 0.06, 0.05
	Linux old-k8s-version-919612 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]: net.(*sysDialer).dialSerial(0xc000962d80, 0x4f7fe40, 0xc000b9ed80, 0xc000b8cd60, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]:         /usr/local/go/src/net/dial.go:548 +0x152
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]: net.(*Dialer).DialContext(0xc0001f2060, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009fd170, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0009224a0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009fd170, 0x24, 0x60, 0x7f777001e7b8, 0x118, ...)
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]: net/http.(*Transport).dial(0xc0004cc640, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc0009fd170, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]: net/http.(*Transport).dialConn(0xc0004cc640, 0x4f7fe00, 0xc000120018, 0x0, 0xc000478480, 0x5, 0xc0009fd170, 0x24, 0x0, 0xc000b985a0, ...)
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]: net/http.(*Transport).dialConnFor(0xc0004cc640, 0xc0009764d0)
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]: created by net/http.(*Transport).queueForDial
	Apr 29 20:25:41 old-k8s-version-919612 kubelet[6751]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 29 20:25:41 old-k8s-version-919612 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 29 20:25:41 old-k8s-version-919612 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 29 20:25:42 old-k8s-version-919612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 139.
	Apr 29 20:25:42 old-k8s-version-919612 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 29 20:25:42 old-k8s-version-919612 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 29 20:25:42 old-k8s-version-919612 kubelet[6778]: I0429 20:25:42.165684    6778 server.go:416] Version: v1.20.0
	Apr 29 20:25:42 old-k8s-version-919612 kubelet[6778]: I0429 20:25:42.167126    6778 server.go:837] Client rotation is on, will bootstrap in background
	Apr 29 20:25:42 old-k8s-version-919612 kubelet[6778]: I0429 20:25:42.172475    6778 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 29 20:25:42 old-k8s-version-919612 kubelet[6778]: I0429 20:25:42.174996    6778 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Apr 29 20:25:42 old-k8s-version-919612 kubelet[6778]: W0429 20:25:42.175049    6778 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-919612 -n old-k8s-version-919612
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-919612 -n old-k8s-version-919612: exit status 2 (242.715324ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-919612" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (149.27s)
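
Note on this failure: the kubelet log captured above ends with "Cannot detect current cgroup on cgroup v2" and the start flow exits with K8S_KUBELET_NOT_RUNNING, so the v1.20.0 kubelet never comes up and every later status and kubectl check sees a stopped apiserver. The minikube output itself points at a remedy. A minimal sketch of retrying the same profile with that suggestion applied (profile name, driver and runtime are taken from the commands captured above; whether it actually fixes this run is not verified by this report):

	minikube delete -p old-k8s-version-919612
	minikube start -p old-k8s-version-919612 --kubernetes-version=v1.20.0 \
	  --container-runtime=crio --driver=kvm2 \
	  --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still fails to start, the troubleshooting commands already quoted in the kubeadm output ('systemctl status kubelet', 'journalctl -xeu kubelet', and the crictl listing) are the next step on the node itself.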

                                                
                                    

Test pass (243/311)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 54.75
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0/json-events 14.98
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.14
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.58
22 TestOffline 87.65
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 219.15
29 TestAddons/parallel/Registry 19.02
31 TestAddons/parallel/InspektorGadget 12.83
33 TestAddons/parallel/HelmTiller 16.03
35 TestAddons/parallel/CSI 55.55
36 TestAddons/parallel/Headlamp 13.25
37 TestAddons/parallel/CloudSpanner 6.76
38 TestAddons/parallel/LocalPath 14.65
39 TestAddons/parallel/NvidiaDevicePlugin 6.59
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.12
45 TestCertOptions 72
46 TestCertExpiration 287.31
48 TestForceSystemdFlag 81.62
49 TestForceSystemdEnv 101.31
51 TestKVMDriverInstallOrUpdate 16.7
55 TestErrorSpam/setup 44.98
56 TestErrorSpam/start 0.36
57 TestErrorSpam/status 0.8
58 TestErrorSpam/pause 1.68
59 TestErrorSpam/unpause 1.73
60 TestErrorSpam/stop 4.84
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 98.2
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 51.67
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.24
72 TestFunctional/serial/CacheCmd/cache/add_local 2.3
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
77 TestFunctional/serial/CacheCmd/cache/delete 0.12
78 TestFunctional/serial/MinikubeKubectlCmd 0.12
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 57.97
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.63
83 TestFunctional/serial/LogsFileCmd 1.76
84 TestFunctional/serial/InvalidService 40.27
86 TestFunctional/parallel/ConfigCmd 0.44
87 TestFunctional/parallel/DashboardCmd 36.56
88 TestFunctional/parallel/DryRun 0.32
89 TestFunctional/parallel/InternationalLanguage 0.17
90 TestFunctional/parallel/StatusCmd 1.04
94 TestFunctional/parallel/ServiceCmdConnect 10.49
95 TestFunctional/parallel/AddonsCmd 0.15
96 TestFunctional/parallel/PersistentVolumeClaim 55.46
98 TestFunctional/parallel/SSHCmd 0.55
99 TestFunctional/parallel/CpCmd 1.51
100 TestFunctional/parallel/MySQL 27.45
101 TestFunctional/parallel/FileSync 0.28
102 TestFunctional/parallel/CertSync 1.4
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
110 TestFunctional/parallel/License 0.64
111 TestFunctional/parallel/ServiceCmd/DeployApp 11.26
121 TestFunctional/parallel/Version/short 0.06
122 TestFunctional/parallel/Version/components 0.69
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
124 TestFunctional/parallel/ProfileCmd/profile_list 0.35
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
126 TestFunctional/parallel/MountCmd/any-port 8.66
127 TestFunctional/parallel/ServiceCmd/List 0.32
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.27
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
130 TestFunctional/parallel/ServiceCmd/Format 0.37
131 TestFunctional/parallel/ServiceCmd/URL 0.33
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.38
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
139 TestFunctional/parallel/ImageCommands/ImageBuild 5.08
140 TestFunctional/parallel/ImageCommands/Setup 2.12
141 TestFunctional/parallel/MountCmd/specific-port 2.15
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.66
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.33
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.03
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.68
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
150 TestFunctional/delete_addon-resizer_images 0.07
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 283.07
157 TestMultiControlPlane/serial/DeployApp 8.38
158 TestMultiControlPlane/serial/PingHostFromPods 1.41
159 TestMultiControlPlane/serial/AddWorkerNode 47.02
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.58
162 TestMultiControlPlane/serial/CopyFile 13.92
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.53
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.43
168 TestMultiControlPlane/serial/DeleteSecondaryNode 17.66
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
171 TestMultiControlPlane/serial/RestartCluster 293.35
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.41
173 TestMultiControlPlane/serial/AddSecondaryNode 79.36
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.57
178 TestJSONOutput/start/Command 58.54
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.77
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.7
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.42
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.21
206 TestMainNoArgs 0.06
207 TestMinikubeProfile 100.52
210 TestMountStart/serial/StartWithMountFirst 27.46
211 TestMountStart/serial/VerifyMountFirst 0.39
212 TestMountStart/serial/StartWithMountSecond 28.92
213 TestMountStart/serial/VerifyMountSecond 0.4
214 TestMountStart/serial/DeleteFirst 0.68
215 TestMountStart/serial/VerifyMountPostDelete 0.4
216 TestMountStart/serial/Stop 2.29
217 TestMountStart/serial/RestartStopped 23.57
218 TestMountStart/serial/VerifyMountPostStop 0.39
221 TestMultiNode/serial/FreshStart2Nodes 109.06
222 TestMultiNode/serial/DeployApp2Nodes 5.79
223 TestMultiNode/serial/PingHostFrom2Pods 0.91
224 TestMultiNode/serial/AddNode 41.27
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.23
227 TestMultiNode/serial/CopyFile 7.52
228 TestMultiNode/serial/StopNode 2.46
229 TestMultiNode/serial/StartAfterStop 32.22
231 TestMultiNode/serial/DeleteNode 2.31
233 TestMultiNode/serial/RestartMultiNode 178.9
234 TestMultiNode/serial/ValidateNameConflict 45.76
241 TestScheduledStopUnix 115.1
245 TestRunningBinaryUpgrade 191.53
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
251 TestNoKubernetes/serial/StartWithK8s 126.93
252 TestStoppedBinaryUpgrade/Setup 2.72
253 TestStoppedBinaryUpgrade/Upgrade 131.11
254 TestNoKubernetes/serial/StartWithStopK8s 55.86
255 TestNoKubernetes/serial/Start 43.7
256 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
265 TestPause/serial/Start 63.11
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
267 TestNoKubernetes/serial/ProfileList 1.94
268 TestNoKubernetes/serial/Stop 1.6
269 TestNoKubernetes/serial/StartNoArgs 42.34
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
278 TestNetworkPlugins/group/false 5.09
286 TestStartStop/group/no-preload/serial/FirstStart 144.92
288 TestStartStop/group/embed-certs/serial/FirstStart 94.7
289 TestStartStop/group/embed-certs/serial/DeployApp 11.3
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.05
292 TestStartStop/group/no-preload/serial/DeployApp 10.28
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 65.65
297 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.32
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
303 TestStartStop/group/embed-certs/serial/SecondStart 686.76
305 TestStartStop/group/no-preload/serial/SecondStart 605.37
306 TestStartStop/group/old-k8s-version/serial/Stop 5.3
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 492.02
320 TestStartStop/group/newest-cni/serial/FirstStart 59.16
321 TestNetworkPlugins/group/auto/Start 73.72
322 TestNetworkPlugins/group/kindnet/Start 115.77
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.66
325 TestStartStop/group/newest-cni/serial/Stop 7.41
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
327 TestStartStop/group/newest-cni/serial/SecondStart 54.95
328 TestNetworkPlugins/group/auto/KubeletFlags 0.27
329 TestNetworkPlugins/group/auto/NetCatPod 11.32
330 TestNetworkPlugins/group/auto/DNS 0.16
331 TestNetworkPlugins/group/auto/Localhost 0.16
332 TestNetworkPlugins/group/auto/HairPin 0.14
333 TestNetworkPlugins/group/calico/Start 95.65
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
337 TestStartStop/group/newest-cni/serial/Pause 2.76
338 TestNetworkPlugins/group/custom-flannel/Start 107.21
339 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
340 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
341 TestNetworkPlugins/group/kindnet/NetCatPod 12.27
342 TestNetworkPlugins/group/kindnet/DNS 0.16
343 TestNetworkPlugins/group/kindnet/Localhost 0.19
344 TestNetworkPlugins/group/kindnet/HairPin 0.15
345 TestNetworkPlugins/group/enable-default-cni/Start 64.77
346 TestNetworkPlugins/group/flannel/Start 110.27
347 TestNetworkPlugins/group/calico/ControllerPod 6.01
348 TestNetworkPlugins/group/calico/KubeletFlags 0.34
349 TestNetworkPlugins/group/calico/NetCatPod 12.34
350 TestNetworkPlugins/group/calico/DNS 0.18
351 TestNetworkPlugins/group/calico/Localhost 0.15
352 TestNetworkPlugins/group/calico/HairPin 0.15
353 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
354 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.26
355 TestNetworkPlugins/group/custom-flannel/DNS 0.26
356 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
357 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
358 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
359 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.3
360 TestNetworkPlugins/group/bridge/Start 102.95
361 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
362 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
363 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
364 TestNetworkPlugins/group/flannel/ControllerPod 6.01
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
366 TestNetworkPlugins/group/flannel/NetCatPod 11.22
367 TestNetworkPlugins/group/flannel/DNS 0.16
368 TestNetworkPlugins/group/flannel/Localhost 0.13
369 TestNetworkPlugins/group/flannel/HairPin 0.15
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
371 TestNetworkPlugins/group/bridge/NetCatPod 10.23
372 TestNetworkPlugins/group/bridge/DNS 0.17
373 TestNetworkPlugins/group/bridge/Localhost 0.14
374 TestNetworkPlugins/group/bridge/HairPin 0.13
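
The sections that follow expand each of these passing tests with the same per-test log detail used for the failures above. To re-run a single test from this report against a local checkout, the usual entry point is plain go test; the following is only a sketch (the ./test/integration path assumes the standard minikube source layout, and the harness flags that select the kvm2 driver and crio runtime are omitted because their exact names are not shown in this report):

	go test -v -timeout 90m ./test/integration -run 'TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop'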
x
+
TestDownloadOnly/v1.20.0/json-events (54.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-513783 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-513783 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (54.751937312s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (54.75s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-513783
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-513783: exit status 85 (71.212708ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-513783 | jenkins | v1.33.0 | 29 Apr 24 18:39 UTC |          |
	|         | -p download-only-513783        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 18:39:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 18:39:10.451666   15136 out.go:291] Setting OutFile to fd 1 ...
	I0429 18:39:10.451925   15136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:39:10.451934   15136 out.go:304] Setting ErrFile to fd 2...
	I0429 18:39:10.451938   15136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:39:10.452130   15136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	W0429 18:39:10.452257   15136 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18774-7754/.minikube/config/config.json: open /home/jenkins/minikube-integration/18774-7754/.minikube/config/config.json: no such file or directory
	I0429 18:39:10.452881   15136 out.go:298] Setting JSON to true
	I0429 18:39:10.453727   15136 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1248,"bootTime":1714414702,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 18:39:10.453789   15136 start.go:139] virtualization: kvm guest
	I0429 18:39:10.456590   15136 out.go:97] [download-only-513783] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 18:39:10.458025   15136 out.go:169] MINIKUBE_LOCATION=18774
	W0429 18:39:10.456690   15136 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball: no such file or directory
	I0429 18:39:10.456719   15136 notify.go:220] Checking for updates...
	I0429 18:39:10.460813   15136 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 18:39:10.462140   15136 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:39:10.463388   15136 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:39:10.464563   15136 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0429 18:39:10.466599   15136 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 18:39:10.466848   15136 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 18:39:10.569571   15136 out.go:97] Using the kvm2 driver based on user configuration
	I0429 18:39:10.569603   15136 start.go:297] selected driver: kvm2
	I0429 18:39:10.569609   15136 start.go:901] validating driver "kvm2" against <nil>
	I0429 18:39:10.569930   15136 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:39:10.570053   15136 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 18:39:10.585030   15136 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 18:39:10.585121   15136 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 18:39:10.585620   15136 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0429 18:39:10.585788   15136 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 18:39:10.585867   15136 cni.go:84] Creating CNI manager for ""
	I0429 18:39:10.585883   15136 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 18:39:10.585893   15136 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 18:39:10.585995   15136 start.go:340] cluster config:
	{Name:download-only-513783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-513783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:39:10.586222   15136 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:39:10.588060   15136 out.go:97] Downloading VM boot image ...
	I0429 18:39:10.588102   15136 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18774-7754/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 18:39:21.215380   15136 out.go:97] Starting "download-only-513783" primary control-plane node in "download-only-513783" cluster
	I0429 18:39:21.215401   15136 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 18:39:21.329859   15136 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0429 18:39:21.329890   15136 cache.go:56] Caching tarball of preloaded images
	I0429 18:39:21.330118   15136 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 18:39:21.332196   15136 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0429 18:39:21.332226   15136 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0429 18:39:21.444013   15136 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0429 18:39:37.875301   15136 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0429 18:39:37.875400   15136 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0429 18:39:38.783810   15136 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0429 18:39:38.784172   15136 profile.go:143] Saving config to /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/download-only-513783/config.json ...
	I0429 18:39:38.784202   15136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/download-only-513783/config.json: {Name:mk6928af0a893306e2e25ca1725c1f375afa7f0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 18:39:38.784352   15136 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0429 18:39:38.784511   15136 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18774-7754/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-513783 host does not exist
	  To start a cluster, run: "minikube start -p download-only-513783"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
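
The non-zero exit here is expected rather than a defect: the profile was created with --download-only, so no VM host was ever started, and "minikube logs" has nothing to collect beyond the audit and start log shown above (the output itself says the control-plane node host does not exist). A quick manual illustration of the same behavior, using a hypothetical profile name:

	minikube start -p download-only-demo --download-only --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2
	minikube logs -p download-only-demo     # non-zero exit, the host was never created
	minikube delete -p download-only-demo

The test name suggests it only bounds how long log collection takes, which would explain why it still passes despite exit status 85.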

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-513783
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/json-events (14.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-450771 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-450771 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.980495084s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (14.98s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-450771
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-450771: exit status 85 (70.390368ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-513783 | jenkins | v1.33.0 | 29 Apr 24 18:39 UTC |                     |
	|         | -p download-only-513783        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| delete  | -p download-only-513783        | download-only-513783 | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC | 29 Apr 24 18:40 UTC |
	| start   | -o=json --download-only        | download-only-450771 | jenkins | v1.33.0 | 29 Apr 24 18:40 UTC |                     |
	|         | -p download-only-450771        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 18:40:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 18:40:05.550452   15471 out.go:291] Setting OutFile to fd 1 ...
	I0429 18:40:05.550608   15471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:40:05.550618   15471 out.go:304] Setting ErrFile to fd 2...
	I0429 18:40:05.550625   15471 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:40:05.550862   15471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 18:40:05.551449   15471 out.go:298] Setting JSON to true
	I0429 18:40:05.552261   15471 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1303,"bootTime":1714414702,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 18:40:05.552319   15471 start.go:139] virtualization: kvm guest
	I0429 18:40:05.554469   15471 out.go:97] [download-only-450771] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 18:40:05.555886   15471 out.go:169] MINIKUBE_LOCATION=18774
	I0429 18:40:05.554646   15471 notify.go:220] Checking for updates...
	I0429 18:40:05.558658   15471 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 18:40:05.560094   15471 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:40:05.561579   15471 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:40:05.562923   15471 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0429 18:40:05.565251   15471 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 18:40:05.565487   15471 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 18:40:05.595296   15471 out.go:97] Using the kvm2 driver based on user configuration
	I0429 18:40:05.595320   15471 start.go:297] selected driver: kvm2
	I0429 18:40:05.595325   15471 start.go:901] validating driver "kvm2" against <nil>
	I0429 18:40:05.595624   15471 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:40:05.595706   15471 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18774-7754/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0429 18:40:05.610906   15471 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0429 18:40:05.610959   15471 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 18:40:05.611394   15471 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0429 18:40:05.611531   15471 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 18:40:05.611579   15471 cni.go:84] Creating CNI manager for ""
	I0429 18:40:05.611591   15471 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0429 18:40:05.611600   15471 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 18:40:05.611647   15471 start.go:340] cluster config:
	{Name:download-only-450771 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-450771 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:40:05.611733   15471 iso.go:125] acquiring lock: {Name:mk7ac5bbcadd939eb992cb25f14a8ea1a46dc7aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 18:40:05.613487   15471 out.go:97] Starting "download-only-450771" primary control-plane node in "download-only-450771" cluster
	I0429 18:40:05.613510   15471 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 18:40:06.158876   15471 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0429 18:40:06.158917   15471 cache.go:56] Caching tarball of preloaded images
	I0429 18:40:06.159078   15471 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0429 18:40:06.161085   15471 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0429 18:40:06.161102   15471 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0429 18:40:06.710549   15471 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:5927bd9d05f26d08fc05540d1d92e5d8 -> /home/jenkins/minikube-integration/18774-7754/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-450771 host does not exist
	  To start a cluster, run: "minikube start -p download-only-450771"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-450771
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-527606 --alsologtostderr --binary-mirror http://127.0.0.1:33939 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-527606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-527606
--- PASS: TestBinaryMirror (0.58s)
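
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:33939) instead of dl.k8s.io. A minimal sketch of serving a directory of release binaries over HTTP, assuming a hypothetical ./mirror layout; this is not the test's actual mirror server:

// mirror_sketch.go: serve a local directory as a download mirror on 127.0.0.1:33939.
package main

import (
	"log"
	"net/http"
)

func main() {
	// ./mirror is a hypothetical directory holding the binaries minikube would otherwise download.
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("serving ./mirror on http://127.0.0.1:33939")
	log.Fatal(http.ListenAndServe("127.0.0.1:33939", fs))
}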

                                                
                                    
x
+
TestOffline (87.65s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-529277 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-529277 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m26.636057357s)
helpers_test.go:175: Cleaning up "offline-crio-529277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-529277
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-529277: (1.009712356s)
--- PASS: TestOffline (87.65s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-412183
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-412183: exit status 85 (62.14447ms)

                                                
                                                
-- stdout --
	* Profile "addons-412183" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-412183"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-412183
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-412183: exit status 85 (61.95717ms)

                                                
                                                
-- stdout --
	* Profile "addons-412183" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-412183"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
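
Both PreSetup checks above assert that enabling or disabling an addon on a non-existent profile exits with status 85 rather than succeeding. A minimal sketch of capturing a command's exit code in Go, assuming the freshly built binary at out/minikube-linux-amd64 is run from the test workspace; this is not the harness's actual helper:

// exitcode_sketch.go: run a command and report its exit status, mirroring the exit-85 assertion above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-412183")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output:\n%s", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit status:", exitErr.ExitCode()) // expected to print 85 for a missing profile
	} else if err != nil {
		fmt.Println("command failed to run:", err)
	} else {
		fmt.Println("exit status: 0")
	}
}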

                                                
                                    
x
+
TestAddons/Setup (219.15s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-412183 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-412183 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m39.154364123s)
--- PASS: TestAddons/Setup (219.15s)

                                                
                                    
x
+
TestAddons/parallel/Registry (19.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 29.024606ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vkwz2" [cbb1f320-7afd-403e-96b8-4e34ed9b2d78] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006277447s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fvvc6" [8835c731-1707-4dca-9621-b9f326ad0cd2] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00483631s
addons_test.go:340: (dbg) Run:  kubectl --context addons-412183 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-412183 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-412183 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.053933593s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 ip
2024/04/29 18:44:19 [DEBUG] GET http://192.168.39.105:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.02s)
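
The registry check ends with a plain HTTP GET against the registry endpoint (the DEBUG line above). A minimal sketch of the same reachability probe, with the address copied from that log line purely for illustration:

// registry_probe_sketch.go: probe the registry endpoint the way the DEBUG GET above does.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.39.105:5000")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with:", resp.Status)
}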

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.83s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4fdlj" [8033cadc-2313-4069-ba4d-ef1d6d16bb13] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005431777s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-412183
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-412183: (6.819130096s)
--- PASS: TestAddons/parallel/InspektorGadget (12.83s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (16.03s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 2.343538ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-424j5" [d9343705-996d-40f7-9597-aba3801d8af1] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005795229s
addons_test.go:473: (dbg) Run:  kubectl --context addons-412183 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-412183 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.309177725s)
addons_test.go:478: kubectl --context addons-412183 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: error stream protocol error: unknown error
addons_test.go:473: (dbg) Run:  kubectl --context addons-412183 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-412183 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.473675968s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (16.03s)

                                                
                                    
x
+
TestAddons/parallel/CSI (55.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 30.954575ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-412183 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-412183 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [bf5cc23b-3aeb-4852-aed7-25fd9305addd] Pending
helpers_test.go:344: "task-pv-pod" [bf5cc23b-3aeb-4852-aed7-25fd9305addd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [bf5cc23b-3aeb-4852-aed7-25fd9305addd] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004266364s
addons_test.go:584: (dbg) Run:  kubectl --context addons-412183 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-412183 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-412183 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-412183 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-412183 delete pod task-pv-pod: (1.684335273s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-412183 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-412183 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-412183 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0e5bf02a-2ed4-49f2-83df-c9026318567c] Pending
helpers_test.go:344: "task-pv-pod-restore" [0e5bf02a-2ed4-49f2-83df-c9026318567c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0e5bf02a-2ed4-49f2-83df-c9026318567c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00653738s
addons_test.go:626: (dbg) Run:  kubectl --context addons-412183 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-412183 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-412183 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-412183 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.989696816s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.55s)
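
The repeated helpers_test.go:394 lines above poll the PVC phase via kubectl until it becomes Bound. A minimal sketch of that polling pattern, with names and timeout taken from the commands above; this is not the helper's actual implementation:

// pvc_wait_sketch.go: poll a PVC's phase with kubectl until it reports Bound or the timeout expires.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(context, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context, "get", "pvc", name,
			"-n", namespace, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	fmt.Println(waitForPVCBound("addons-412183", "default", "hpvc", 6*time.Minute))
}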

                                                
                                    
x
+
TestAddons/parallel/Headlamp (13.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-412183 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-412183 --alsologtostderr -v=1: (1.244824706s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-58zjw" [37a072c8-8aaf-4735-86a9-4bd44444005d] Pending
helpers_test.go:344: "headlamp-7559bf459f-58zjw" [37a072c8-8aaf-4735-86a9-4bd44444005d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-58zjw" [37a072c8-8aaf-4735-86a9-4bd44444005d] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004909027s
--- PASS: TestAddons/parallel/Headlamp (13.25s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.76s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dc8d859f6-bcpb4" [b9f7250e-8d51-43c6-9ad6-d0c7bd6334d6] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004901806s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-412183
--- PASS: TestAddons/parallel/CloudSpanner (6.76s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (14.65s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-412183 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-412183 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-412183 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7bd533d2-ff03-488d-b42f-1b3f70582444] Pending
helpers_test.go:344: "test-local-path" [7bd533d2-ff03-488d-b42f-1b3f70582444] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7bd533d2-ff03-488d-b42f-1b3f70582444] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7bd533d2-ff03-488d-b42f-1b3f70582444] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004607203s
addons_test.go:891: (dbg) Run:  kubectl --context addons-412183 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 ssh "cat /opt/local-path-provisioner/pvc-44e4f926-cc71-46f4-8659-1c0700bd3215_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-412183 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-412183 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-412183 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (14.65s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bdlx2" [ae8e59a0-c1bc-4229-a163-f1999243d24f] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004558377s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-412183
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-5b87k" [695334d7-ed81-4e1f-8805-0b308e61e51f] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005952242s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-412183 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-412183 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestCertOptions (72s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-437743 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-437743 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m10.537366137s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-437743 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-437743 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-437743 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-437743" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-437743
--- PASS: TestCertOptions (72.00s)
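
TestCertOptions inspects the generated apiserver certificate with openssl, presumably to confirm that the extra --apiserver-ips and --apiserver-names flags landed in the SANs. A minimal sketch of the same inspection in Go, assuming the certificate has first been copied out of the VM to a local path (hypothetical):

// cert_san_sketch.go: parse an apiserver certificate and print its SANs, as the openssl check above does.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of /var/lib/minikube/certs/apiserver.crt.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs :", cert.IPAddresses)
	// With the flags shown above, localhost / www.google.com and 127.0.0.1 / 192.168.15.15
	// would be expected among the SANs.
}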

                                                
                                    
x
+
TestCertExpiration (287.31s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-509508 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-509508 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m5.835324333s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-509508 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-509508 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.484006103s)
helpers_test.go:175: Cleaning up "cert-expiration-509508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-509508
--- PASS: TestCertExpiration (287.31s)

                                                
                                    
x
+
TestForceSystemdFlag (81.62s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-090341 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-090341 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m20.575191087s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-090341 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-090341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-090341
--- PASS: TestForceSystemdFlag (81.62s)
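
TestForceSystemdFlag cats /etc/crio/crio.conf.d/02-crio.conf over ssh, presumably to confirm the cgroup manager was switched to systemd. A minimal sketch of reading that file through minikube ssh and checking for the setting; the key name cgroup_manager is an assumption here, not quoted from the test:

// crio_conf_sketch.go: read CRI-O's drop-in config over minikube ssh and look for a systemd cgroup manager.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-090341",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// The exact key is assumed; the test output only shows that the file is printed.
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("systemd cgroup manager configured")
	} else {
		fmt.Println("systemd cgroup manager not found in drop-in")
	}
}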

                                                
                                    
x
+
TestForceSystemdEnv (101.31s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-819356 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-819356 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m40.522589914s)
helpers_test.go:175: Cleaning up "force-systemd-env-819356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-819356
--- PASS: TestForceSystemdEnv (101.31s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (16.7s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (16.70s)

                                                
                                    
x
+
TestErrorSpam/setup (44.98s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-353106 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-353106 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-353106 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-353106 --driver=kvm2  --container-runtime=crio: (44.976563346s)
--- PASS: TestErrorSpam/setup (44.98s)

                                                
                                    
x
+
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
x
+
TestErrorSpam/status (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
x
+
TestErrorSpam/pause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 pause
--- PASS: TestErrorSpam/pause (1.68s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
x
+
TestErrorSpam/stop (4.84s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 stop: (2.290483719s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 stop: (1.148480571s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-353106 --log_dir /tmp/nospam-353106 stop: (1.397601113s)
--- PASS: TestErrorSpam/stop (4.84s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18774-7754/.minikube/files/etc/test/nested/copy/15124/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (98.2s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-828689 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0429 18:54:00.893571   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:54:00.899298   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:54:00.909607   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:54:00.929910   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:54:00.970291   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:54:01.050659   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:54:01.211156   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:54:01.531720   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:54:02.172750   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:54:03.453253   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:54:06.014114   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:54:11.134842   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:54:21.375832   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:54:41.856440   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-828689 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m38.203666123s)
--- PASS: TestFunctional/serial/StartWithProxy (98.20s)
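
TestFunctional/serial/StartWithProxy presumably launches the start command with proxy variables set in its environment; the proxy setup itself is not visible in this log. A minimal sketch of running the same start command under hypothetical HTTP_PROXY/HTTPS_PROXY values, purely for illustration:

// start_with_proxy_sketch.go: launch minikube start with proxy variables in its environment.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-828689",
		"--memory=4000", "--apiserver-port=8441", "--wait=all",
		"--driver=kvm2", "--container-runtime=crio")
	cmd.Env = append(os.Environ(),
		"HTTP_PROXY=http://127.0.0.1:3128",  // hypothetical local proxy
		"HTTPS_PROXY=http://127.0.0.1:3128", // hypothetical local proxy
		"NO_PROXY=192.168.39.0/24",          // illustrative: keep node traffic off the proxy
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	fmt.Println(cmd.Run())
}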

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (51.67s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-828689 --alsologtostderr -v=8
E0429 18:55:22.816958   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-828689 --alsologtostderr -v=8: (51.672621686s)
functional_test.go:659: soft start took 51.673291333s for "functional-828689" cluster.
--- PASS: TestFunctional/serial/SoftStart (51.67s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-828689 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-828689 cache add registry.k8s.io/pause:3.1: (1.015466773s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-828689 cache add registry.k8s.io/pause:3.3: (1.066015885s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-828689 cache add registry.k8s.io/pause:latest: (1.153966096s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-828689 /tmp/TestFunctionalserialCacheCmdcacheadd_local4060440716/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 cache add minikube-local-cache-test:functional-828689
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-828689 cache add minikube-local-cache-test:functional-828689: (1.940139674s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 cache delete minikube-local-cache-test:functional-828689
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-828689
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828689 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (222.836595ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
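The cache_reload sequence above removes an image from the node's runtime and restores it from minikube's local cache. A sketch of the same flow, assuming an arbitrary profile (<profile> is a placeholder):
	minikube -p <profile> ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image was removed
	minikube -p <profile> cache reload
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload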

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 kubectl -- --context functional-828689 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-828689 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (57.97s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-828689 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0429 18:56:44.740561   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-828689 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (57.973935703s)
functional_test.go:757: restart took 57.974032716s for "functional-828689" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (57.97s)
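ExtraConfig restarts the cluster while threading an extra kube-apiserver flag through minikube. A sketch of the invocation used above, with the profile name as a placeholder:
	minikube start -p <profile> --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all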

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-828689 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-828689 logs: (1.630637206s)
--- PASS: TestFunctional/serial/LogsCmd (1.63s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 logs --file /tmp/TestFunctionalserialLogsFileCmd2307577210/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-828689 logs --file /tmp/TestFunctionalserialLogsFileCmd2307577210/001/logs.txt: (1.75683808s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (40.27s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-828689 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-828689
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-828689: exit status 115 (289.443622ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.72:30709 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-828689 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-828689 delete -f testdata/invalidsvc.yaml: (36.71924894s)
--- PASS: TestFunctional/serial/InvalidService (40.27s)
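InvalidService checks that the service command fails cleanly when a Service has no running backing pod. A sketch of the same flow using the test's own manifest (profile name is a placeholder):
	kubectl apply -f testdata/invalidsvc.yaml
	minikube service invalid-svc -p <profile>      # exits with SVC_UNREACHABLE: no pod backs the service
	kubectl delete -f testdata/invalidsvc.yaml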

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828689 config get cpus: exit status 14 (87.512854ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828689 config get cpus: exit status 14 (63.416671ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (36.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-828689 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-828689 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 25584: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (36.56s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-828689 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-828689 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (163.300986ms)

                                                
                                                
-- stdout --
	* [functional-828689] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 18:58:01.310521   24975 out.go:291] Setting OutFile to fd 1 ...
	I0429 18:58:01.310676   24975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:58:01.310687   24975 out.go:304] Setting ErrFile to fd 2...
	I0429 18:58:01.310694   24975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:58:01.310924   24975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 18:58:01.311440   24975 out.go:298] Setting JSON to false
	I0429 18:58:01.312402   24975 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2379,"bootTime":1714414702,"procs":260,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 18:58:01.312457   24975 start.go:139] virtualization: kvm guest
	I0429 18:58:01.315847   24975 out.go:177] * [functional-828689] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 18:58:01.317386   24975 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 18:58:01.317363   24975 notify.go:220] Checking for updates...
	I0429 18:58:01.318761   24975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 18:58:01.320195   24975 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:58:01.321460   24975 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:58:01.322741   24975 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 18:58:01.326875   24975 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 18:58:01.328883   24975 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:58:01.329471   24975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:58:01.329533   24975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:58:01.347618   24975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41227
	I0429 18:58:01.349108   24975 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:58:01.349699   24975 main.go:141] libmachine: Using API Version  1
	I0429 18:58:01.349732   24975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:58:01.349995   24975 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:58:01.350178   24975 main.go:141] libmachine: (functional-828689) Calling .DriverName
	I0429 18:58:01.350389   24975 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 18:58:01.350646   24975 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:58:01.350674   24975 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:58:01.366930   24975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0429 18:58:01.367381   24975 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:58:01.368228   24975 main.go:141] libmachine: Using API Version  1
	I0429 18:58:01.368249   24975 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:58:01.368532   24975 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:58:01.368707   24975 main.go:141] libmachine: (functional-828689) Calling .DriverName
	I0429 18:58:01.403437   24975 out.go:177] * Using the kvm2 driver based on existing profile
	I0429 18:58:01.405254   24975 start.go:297] selected driver: kvm2
	I0429 18:58:01.405273   24975 start.go:901] validating driver "kvm2" against &{Name:functional-828689 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-828689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:58:01.405413   24975 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 18:58:01.408113   24975 out.go:177] 
	W0429 18:58:01.409481   24975 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0429 18:58:01.410891   24975 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-828689 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
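The dry run validates flags against the existing profile without changing it; asking for 250MB trips the 1800MB minimum-memory check and the command exits with status 23. A sketch, with the profile name as a placeholder:
	minikube start -p <profile> --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio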

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-828689 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-828689 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (168.20959ms)

                                                
                                                
-- stdout --
	* [functional-828689] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 18:58:01.594944   25086 out.go:291] Setting OutFile to fd 1 ...
	I0429 18:58:01.595191   25086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:58:01.595201   25086 out.go:304] Setting ErrFile to fd 2...
	I0429 18:58:01.595205   25086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 18:58:01.595472   25086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 18:58:01.595913   25086 out.go:298] Setting JSON to false
	I0429 18:58:01.596953   25086 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2380,"bootTime":1714414702,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 18:58:01.597033   25086 start.go:139] virtualization: kvm guest
	I0429 18:58:01.599286   25086 out.go:177] * [functional-828689] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0429 18:58:01.600755   25086 notify.go:220] Checking for updates...
	I0429 18:58:01.600770   25086 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 18:58:01.602129   25086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 18:58:01.603340   25086 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 18:58:01.604816   25086 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 18:58:01.606033   25086 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 18:58:01.607362   25086 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 18:58:01.609192   25086 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 18:58:01.609914   25086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:58:01.609972   25086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:58:01.628552   25086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41575
	I0429 18:58:01.629032   25086 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:58:01.629548   25086 main.go:141] libmachine: Using API Version  1
	I0429 18:58:01.629586   25086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:58:01.629924   25086 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:58:01.630126   25086 main.go:141] libmachine: (functional-828689) Calling .DriverName
	I0429 18:58:01.630390   25086 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 18:58:01.630808   25086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 18:58:01.630862   25086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 18:58:01.649202   25086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38911
	I0429 18:58:01.649717   25086 main.go:141] libmachine: () Calling .GetVersion
	I0429 18:58:01.650239   25086 main.go:141] libmachine: Using API Version  1
	I0429 18:58:01.650265   25086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 18:58:01.650722   25086 main.go:141] libmachine: () Calling .GetMachineName
	I0429 18:58:01.650986   25086 main.go:141] libmachine: (functional-828689) Calling .DriverName
	I0429 18:58:01.690636   25086 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0429 18:58:01.691959   25086 start.go:297] selected driver: kvm2
	I0429 18:58:01.691974   25086 start.go:901] validating driver "kvm2" against &{Name:functional-828689 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-828689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.72 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 18:58:01.692117   25086 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 18:58:01.694462   25086 out.go:177] 
	W0429 18:58:01.695829   25086 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0429 18:58:01.697222   25086 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)
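minikube status accepts a Go template via -f and JSON via -o json; the literal text around each {{...}} field is free-form. A sketch, with the profile name as a placeholder:
	minikube -p <profile> status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	minikube -p <profile> status -o json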

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-828689 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-828689 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-l2r8f" [84730539-4762-45d8-abb0-1fb62e14b6b5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-l2r8f" [84730539-4762-45d8-abb0-1fb62e14b6b5] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004352216s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.72:32116
functional_test.go:1671: http://192.168.39.72:32116: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-l2r8f

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.72:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.72:32116
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.49s)
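ServiceCmdConnect deploys echoserver, exposes it as a NodePort, and resolves the node URL through minikube; the test then fetches that URL from Go. A sketch of the same flow (profile is a placeholder; the final curl is an assumed manual substitute for the in-test HTTP GET):
	kubectl create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl expose deployment hello-node-connect --type=NodePort --port=8080
	minikube -p <profile> service hello-node-connect --url
	curl http://<node-ip>:<node-port>/      # assumed manual check; echoserver prints the request dump seen above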

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (55.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1efc42d3-a0f2-42f2-bbc4-c91bb3d354ae] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005704996s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-828689 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-828689 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-828689 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-828689 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-828689 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d94c5ac2-cbee-4359-9d1b-e8d4af187cf9] Pending
helpers_test.go:344: "sp-pod" [d94c5ac2-cbee-4359-9d1b-e8d4af187cf9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d94c5ac2-cbee-4359-9d1b-e8d4af187cf9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.005528015s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-828689 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-828689 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-828689 delete -f testdata/storage-provisioner/pod.yaml: (4.464809999s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-828689 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5b981a93-8dd8-4fe6-9795-6eb1c9a1f16d] Pending
helpers_test.go:344: "sp-pod" [5b981a93-8dd8-4fe6-9795-6eb1c9a1f16d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5b981a93-8dd8-4fe6-9795-6eb1c9a1f16d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.00390492s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-828689 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (55.46s)
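The PersistentVolumeClaim flow shows that data written through the PVC survives deletion and re-creation of the pod. A sketch using the test's own manifests (kubectl context handling omitted):
	kubectl apply -f testdata/storage-provisioner/pvc.yaml
	kubectl apply -f testdata/storage-provisioner/pod.yaml
	kubectl exec sp-pod -- touch /tmp/mount/foo
	kubectl delete -f testdata/storage-provisioner/pod.yaml
	kubectl apply -f testdata/storage-provisioner/pod.yaml
	kubectl exec sp-pod -- ls /tmp/mount      # expected to still list foo from the reattached volume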

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh -n functional-828689 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 cp functional-828689:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4259578541/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh -n functional-828689 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh -n functional-828689 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.51s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (27.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-828689 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-pfn89" [62648495-3121-4b53-837b-f9085623245d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-pfn89" [62648495-3121-4b53-837b-f9085623245d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.006104326s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-828689 exec mysql-64454c8b5c-pfn89 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-828689 exec mysql-64454c8b5c-pfn89 -- mysql -ppassword -e "show databases;": exit status 1 (194.984074ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-828689 exec mysql-64454c8b5c-pfn89 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-828689 exec mysql-64454c8b5c-pfn89 -- mysql -ppassword -e "show databases;": exit status 1 (321.141095ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-828689 exec mysql-64454c8b5c-pfn89 -- mysql -ppassword -e "show databases;"
2024/04/29 18:58:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (27.45s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/15124/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "sudo cat /etc/test/nested/copy/15124/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/15124.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "sudo cat /etc/ssl/certs/15124.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/15124.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "sudo cat /usr/share/ca-certificates/15124.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/151242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "sudo cat /etc/ssl/certs/151242.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/151242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "sudo cat /usr/share/ca-certificates/151242.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)
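CertSync verifies that a test certificate is present in the guest both under /etc/ssl/certs and /usr/share/ca-certificates, including the hashed symlink name. A sketch of the verification step (file names are this run's test IDs; profile is a placeholder):
	minikube -p <profile> ssh "sudo cat /etc/ssl/certs/15124.pem"
	minikube -p <profile> ssh "sudo cat /etc/ssl/certs/51391683.0"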

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-828689 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828689 ssh "sudo systemctl is-active docker": exit status 1 (272.126786ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828689 ssh "sudo systemctl is-active containerd": exit status 1 (250.091728ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
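With crio selected as the container runtime, the other runtimes are expected to be inactive inside the node, so systemctl reports inactive and ssh propagates a non-zero exit. A sketch of the check (profile is a placeholder):
	minikube -p <profile> ssh "sudo systemctl is-active docker"
	minikube -p <profile> ssh "sudo systemctl is-active containerd"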

                                                
                                    
x
+
TestFunctional/parallel/License (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-828689 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-828689 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-p7gzn" [73d86911-c74c-4ab3-b968-20a5bb1bf775] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-p7gzn" [73d86911-c74c-4ab3-b968-20a5bb1bf775] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00517615s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.26s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "283.977587ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "67.695135ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "282.433927ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "55.782215ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-828689 /tmp/TestFunctionalparallelMountCmdany-port1970283839/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714417073439299672" to /tmp/TestFunctionalparallelMountCmdany-port1970283839/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714417073439299672" to /tmp/TestFunctionalparallelMountCmdany-port1970283839/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714417073439299672" to /tmp/TestFunctionalparallelMountCmdany-port1970283839/001/test-1714417073439299672
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828689 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (228.455523ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 29 18:57 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 29 18:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 29 18:57 test-1714417073439299672
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh cat /mount-9p/test-1714417073439299672
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-828689 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3c96aa74-9db6-4c5b-b3b3-67ac1c126f66] Pending
helpers_test.go:344: "busybox-mount" [3c96aa74-9db6-4c5b-b3b3-67ac1c126f66] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3c96aa74-9db6-4c5b-b3b3-67ac1c126f66] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3c96aa74-9db6-4c5b-b3b3-67ac1c126f66] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004784168s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-828689 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-828689 /tmp/TestFunctionalparallelMountCmdany-port1970283839/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.66s)
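
A minimal Go sketch of the mount flow exercised above, assuming a running functional-828689 profile: start minikube mount as a background process, then poll findmnt over minikube ssh until the 9p mount appears. The first findmnt in the log fails with exit status 1 because the mount is not visible yet, which is why the test retries; the host directory here is a placeholder.

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	bin := "out/minikube-linux-amd64"

    	// Start the 9p mount daemon (host dir -> /mount-9p in the guest).
    	mount := exec.Command(bin, "mount", "-p", "functional-828689",
    		"/tmp/example-host-dir:/mount-9p")
    	if err := mount.Start(); err != nil {
    		log.Fatalf("starting mount daemon: %v", err)
    	}
    	defer mount.Process.Kill() // stop the daemon, mirroring the test teardown

    	// Poll until the guest can see the 9p mount.
    	for i := 0; i < 10; i++ {
    		out, err := exec.Command(bin, "-p", "functional-828689",
    			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
    		if err == nil {
    			fmt.Printf("9p mount is up:\n%s", out)
    			return
    		}
    		time.Sleep(time.Second)
    	}
    	log.Fatal("mount never became visible in the guest")
    }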

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 service list -o json
functional_test.go:1490: Took "270.663148ms" to run "out/minikube-linux-amd64 -p functional-828689 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.72:31610
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.72:31610
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
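
A small sketch, assuming the hello-node service above is still exposed, of retrieving the reported endpoint and probing it from Go. The profile and service names come from the log; the HTTP check is an illustration and not part of the test itself.

    package main

    import (
    	"fmt"
    	"log"
    	"net/http"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-828689",
    		"service", "hello-node", "--url").Output()
    	if err != nil {
    		log.Fatalf("service --url failed: %v", err)
    	}
    	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.72:31610 as seen above

    	resp, err := http.Get(url)
    	if err != nil {
    		log.Fatalf("endpoint not reachable: %v", err)
    	}
    	defer resp.Body.Close()
    	fmt.Println(url, "->", resp.Status)
    }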

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
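
A sketch of the flow the three update-context cases above exercise, assuming kubectl is on PATH and the functional-828689 context exists: run minikube update-context, then read back the API server address the rewritten kubeconfig points at. The kubectl read-back is an added illustration, not something the test does.

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Rewrite the kubeconfig entry for this profile if the cluster IP changed.
    	if out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-828689",
    		"update-context").CombinedOutput(); err != nil {
    		log.Fatalf("update-context failed: %v\n%s", err, out)
    	}

    	// Show which API server the context now targets.
    	server, err := exec.Command("kubectl", "--context", "functional-828689",
    		"config", "view", "--minify", "-o",
    		"jsonpath={.clusters[0].cluster.server}").Output()
    	if err != nil {
    		log.Fatalf("reading kubeconfig: %v", err)
    	}
    	fmt.Println("API server:", strings.TrimSpace(string(server)))
    }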

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-828689 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-828689
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-828689 image ls --format short --alsologtostderr:
I0429 18:58:26.187668   26303 out.go:291] Setting OutFile to fd 1 ...
I0429 18:58:26.187786   26303 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 18:58:26.187796   26303 out.go:304] Setting ErrFile to fd 2...
I0429 18:58:26.187802   26303 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 18:58:26.188038   26303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
I0429 18:58:26.188632   26303 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 18:58:26.188754   26303 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 18:58:26.189163   26303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 18:58:26.189216   26303 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 18:58:26.205883   26303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
I0429 18:58:26.206414   26303 main.go:141] libmachine: () Calling .GetVersion
I0429 18:58:26.207042   26303 main.go:141] libmachine: Using API Version  1
I0429 18:58:26.207065   26303 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 18:58:26.207414   26303 main.go:141] libmachine: () Calling .GetMachineName
I0429 18:58:26.207617   26303 main.go:141] libmachine: (functional-828689) Calling .GetState
I0429 18:58:26.209452   26303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 18:58:26.209491   26303 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 18:58:26.224436   26303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44629
I0429 18:58:26.224986   26303 main.go:141] libmachine: () Calling .GetVersion
I0429 18:58:26.225556   26303 main.go:141] libmachine: Using API Version  1
I0429 18:58:26.225592   26303 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 18:58:26.225904   26303 main.go:141] libmachine: () Calling .GetMachineName
I0429 18:58:26.226149   26303 main.go:141] libmachine: (functional-828689) Calling .DriverName
I0429 18:58:26.226359   26303 ssh_runner.go:195] Run: systemctl --version
I0429 18:58:26.226387   26303 main.go:141] libmachine: (functional-828689) Calling .GetSSHHostname
I0429 18:58:26.229193   26303 main.go:141] libmachine: (functional-828689) DBG | domain functional-828689 has defined MAC address 52:54:00:39:76:01 in network mk-functional-828689
I0429 18:58:26.229595   26303 main.go:141] libmachine: (functional-828689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:76:01", ip: ""} in network mk-functional-828689: {Iface:virbr1 ExpiryTime:2024-04-29 19:53:44 +0000 UTC Type:0 Mac:52:54:00:39:76:01 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-828689 Clientid:01:52:54:00:39:76:01}
I0429 18:58:26.229631   26303 main.go:141] libmachine: (functional-828689) DBG | domain functional-828689 has defined IP address 192.168.39.72 and MAC address 52:54:00:39:76:01 in network mk-functional-828689
I0429 18:58:26.229799   26303 main.go:141] libmachine: (functional-828689) Calling .GetSSHPort
I0429 18:58:26.229979   26303 main.go:141] libmachine: (functional-828689) Calling .GetSSHKeyPath
I0429 18:58:26.230159   26303 main.go:141] libmachine: (functional-828689) Calling .GetSSHUsername
I0429 18:58:26.230341   26303 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/functional-828689/id_rsa Username:docker}
I0429 18:58:26.321455   26303 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 18:58:26.369498   26303 main.go:141] libmachine: Making call to close driver server
I0429 18:58:26.369511   26303 main.go:141] libmachine: (functional-828689) Calling .Close
I0429 18:58:26.369786   26303 main.go:141] libmachine: Successfully made call to close driver server
I0429 18:58:26.369810   26303 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 18:58:26.369824   26303 main.go:141] libmachine: (functional-828689) DBG | Closing plugin on server side
I0429 18:58:26.369831   26303 main.go:141] libmachine: Making call to close driver server
I0429 18:58:26.369842   26303 main.go:141] libmachine: (functional-828689) Calling .Close
I0429 18:58:26.370055   26303 main.go:141] libmachine: Successfully made call to close driver server
I0429 18:58:26.370082   26303 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-828689 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.30.0            | c42f13656d0b2 | 118MB  |
| registry.k8s.io/kube-proxy              | v1.30.0            | a0bf559e280cf | 85.9MB |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 259c8277fcbbc | 63MB   |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| docker.io/library/nginx                 | latest             | 7383c266ef252 | 192MB  |
| localhost/minikube-local-cache-test     | functional-828689  | 56686b9f57b63 | 3.33kB |
| localhost/my-image                      | functional-828689  | 834845f7bcacf | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.30.0            | c7aad43836fa5 | 112MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-828689 image ls --format table --alsologtostderr:
I0429 18:58:32.078838   26509 out.go:291] Setting OutFile to fd 1 ...
I0429 18:58:32.078991   26509 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 18:58:32.079002   26509 out.go:304] Setting ErrFile to fd 2...
I0429 18:58:32.079006   26509 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 18:58:32.079211   26509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
I0429 18:58:32.079817   26509 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 18:58:32.079914   26509 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 18:58:32.080290   26509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 18:58:32.080363   26509 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 18:58:32.095075   26509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
I0429 18:58:32.095497   26509 main.go:141] libmachine: () Calling .GetVersion
I0429 18:58:32.096091   26509 main.go:141] libmachine: Using API Version  1
I0429 18:58:32.096113   26509 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 18:58:32.096440   26509 main.go:141] libmachine: () Calling .GetMachineName
I0429 18:58:32.096633   26509 main.go:141] libmachine: (functional-828689) Calling .GetState
I0429 18:58:32.098428   26509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 18:58:32.098465   26509 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 18:58:32.113346   26509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
I0429 18:58:32.113788   26509 main.go:141] libmachine: () Calling .GetVersion
I0429 18:58:32.114511   26509 main.go:141] libmachine: Using API Version  1
I0429 18:58:32.114551   26509 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 18:58:32.114893   26509 main.go:141] libmachine: () Calling .GetMachineName
I0429 18:58:32.115049   26509 main.go:141] libmachine: (functional-828689) Calling .DriverName
I0429 18:58:32.115247   26509 ssh_runner.go:195] Run: systemctl --version
I0429 18:58:32.115269   26509 main.go:141] libmachine: (functional-828689) Calling .GetSSHHostname
I0429 18:58:32.117963   26509 main.go:141] libmachine: (functional-828689) DBG | domain functional-828689 has defined MAC address 52:54:00:39:76:01 in network mk-functional-828689
I0429 18:58:32.118418   26509 main.go:141] libmachine: (functional-828689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:76:01", ip: ""} in network mk-functional-828689: {Iface:virbr1 ExpiryTime:2024-04-29 19:53:44 +0000 UTC Type:0 Mac:52:54:00:39:76:01 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-828689 Clientid:01:52:54:00:39:76:01}
I0429 18:58:32.118452   26509 main.go:141] libmachine: (functional-828689) DBG | domain functional-828689 has defined IP address 192.168.39.72 and MAC address 52:54:00:39:76:01 in network mk-functional-828689
I0429 18:58:32.118537   26509 main.go:141] libmachine: (functional-828689) Calling .GetSSHPort
I0429 18:58:32.118722   26509 main.go:141] libmachine: (functional-828689) Calling .GetSSHKeyPath
I0429 18:58:32.118874   26509 main.go:141] libmachine: (functional-828689) Calling .GetSSHUsername
I0429 18:58:32.119054   26509 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/functional-828689/id_rsa Username:docker}
I0429 18:58:32.243251   26509 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 18:58:32.384978   26509 main.go:141] libmachine: Making call to close driver server
I0429 18:58:32.385000   26509 main.go:141] libmachine: (functional-828689) Calling .Close
I0429 18:58:32.385294   26509 main.go:141] libmachine: Successfully made call to close driver server
I0429 18:58:32.385314   26509 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 18:58:32.385331   26509 main.go:141] libmachine: Making call to close driver server
I0429 18:58:32.385339   26509 main.go:141] libmachine: (functional-828689) Calling .Close
I0429 18:58:32.385594   26509 main.go:141] libmachine: Successfully made call to close driver server
I0429 18:58:32.385605   26509 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 18:58:32.385651   26509 main.go:141] libmachine: (functional-828689) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-828689 image ls --format json --alsologtostderr:
[{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":["docker.io/library/nginx@sha256:4d5a113fd08c4dd57aae6870942f8ab4a7d5fd1594b9749c4ae1b505cfd1e7d8","docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee"],"repoTags":["docker.io/library/nginx:latest"],"size":"191760844"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"834845f7bcacf14217463768968bbc27a10eb20a007406cb2bfcd98ca0593ae2","repoDigests":["localhost/my-image@sha256:6e70504a52cbad0ff904050ed43cee2f63b13723702739e9969a285d6f6daccf"],"repoTags":["localhost/my-image:functional-828689"],"size":"1468599"},{"id":"c7aad43836f
a5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"112170310"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"c42f13656d0b2e
905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117609952"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092c
e206e98765c"],"repoTags":[],"size":"43824855"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"85932953"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67","registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"63026502"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"
id":"1e31a3f279cbc74f3cad120bf4512390245cacef5d6f1812decd931169363f22","repoDigests":["docker.io/library/8a2264ab8f718b8491de8201d80a905c39066d4a0acff030292159dd2d840c6b-tmp@sha256:b007675dc97f6baa4cee11f58dd7b6d6fc6911c8985059f1f5f22adb5b8028c6"],"repoTags":[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f7671
3cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"56686b9f57b63c70cc53aa4af01c6d60e437f5475286c823a7af68a070bfd089","repoDigests":["localhost/minikube-local-cache-test@sha256:c7deabbf32a2c546ad46d9cef41aa6c1e41289baf9087711fa59ae929677ef89"],"repoTags":["localhost/minikube-local-cache-test:functional-828689"],"size":"3328"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e006087
28567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-828689 image ls --format json --alsologtostderr:
I0429 18:58:31.795717   26486 out.go:291] Setting OutFile to fd 1 ...
I0429 18:58:31.795837   26486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 18:58:31.795848   26486 out.go:304] Setting ErrFile to fd 2...
I0429 18:58:31.795853   26486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 18:58:31.796063   26486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
I0429 18:58:31.796665   26486 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 18:58:31.796792   26486 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 18:58:31.797198   26486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 18:58:31.797237   26486 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 18:58:31.811686   26486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40925
I0429 18:58:31.812197   26486 main.go:141] libmachine: () Calling .GetVersion
I0429 18:58:31.812866   26486 main.go:141] libmachine: Using API Version  1
I0429 18:58:31.812916   26486 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 18:58:31.813253   26486 main.go:141] libmachine: () Calling .GetMachineName
I0429 18:58:31.813441   26486 main.go:141] libmachine: (functional-828689) Calling .GetState
I0429 18:58:31.815192   26486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 18:58:31.815232   26486 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 18:58:31.829959   26486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43405
I0429 18:58:31.830356   26486 main.go:141] libmachine: () Calling .GetVersion
I0429 18:58:31.830799   26486 main.go:141] libmachine: Using API Version  1
I0429 18:58:31.830834   26486 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 18:58:31.831135   26486 main.go:141] libmachine: () Calling .GetMachineName
I0429 18:58:31.831322   26486 main.go:141] libmachine: (functional-828689) Calling .DriverName
I0429 18:58:31.831568   26486 ssh_runner.go:195] Run: systemctl --version
I0429 18:58:31.831587   26486 main.go:141] libmachine: (functional-828689) Calling .GetSSHHostname
I0429 18:58:31.834211   26486 main.go:141] libmachine: (functional-828689) DBG | domain functional-828689 has defined MAC address 52:54:00:39:76:01 in network mk-functional-828689
I0429 18:58:31.834634   26486 main.go:141] libmachine: (functional-828689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:76:01", ip: ""} in network mk-functional-828689: {Iface:virbr1 ExpiryTime:2024-04-29 19:53:44 +0000 UTC Type:0 Mac:52:54:00:39:76:01 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-828689 Clientid:01:52:54:00:39:76:01}
I0429 18:58:31.834661   26486 main.go:141] libmachine: (functional-828689) DBG | domain functional-828689 has defined IP address 192.168.39.72 and MAC address 52:54:00:39:76:01 in network mk-functional-828689
I0429 18:58:31.834913   26486 main.go:141] libmachine: (functional-828689) Calling .GetSSHPort
I0429 18:58:31.835058   26486 main.go:141] libmachine: (functional-828689) Calling .GetSSHKeyPath
I0429 18:58:31.835217   26486 main.go:141] libmachine: (functional-828689) Calling .GetSSHUsername
I0429 18:58:31.835397   26486 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/functional-828689/id_rsa Username:docker}
I0429 18:58:31.948081   26486 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 18:58:32.007907   26486 main.go:141] libmachine: Making call to close driver server
I0429 18:58:32.007924   26486 main.go:141] libmachine: (functional-828689) Calling .Close
I0429 18:58:32.008226   26486 main.go:141] libmachine: Successfully made call to close driver server
I0429 18:58:32.008244   26486 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 18:58:32.008257   26486 main.go:141] libmachine: Making call to close driver server
I0429 18:58:32.008265   26486 main.go:141] libmachine: (functional-828689) Calling .Close
I0429 18:58:32.008265   26486 main.go:141] libmachine: (functional-828689) DBG | Closing plugin on server side
I0429 18:58:32.010333   26486 main.go:141] libmachine: Successfully made call to close driver server
I0429 18:58:32.010354   26486 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
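
The JSON stdout above is an array of image records. A sketch of decoding it, with a struct inferred from the field names visible in this report's output (id, repoDigests, repoTags, size) rather than from a published schema:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    // image matches the fields seen in the `image ls --format json` output above.
    type image struct {
    	ID          string   `json:"id"`
    	RepoDigests []string `json:"repoDigests"`
    	RepoTags    []string `json:"repoTags"`
    	Size        string   `json:"size"` // size in bytes, reported as a string
    }

    func main() {
    	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-828689",
    		"image", "ls", "--format", "json").Output()
    	if err != nil {
    		log.Fatalf("image ls failed: %v", err)
    	}
    	var images []image
    	if err := json.Unmarshal(out, &images); err != nil {
    		log.Fatalf("decoding image list: %v", err)
    	}
    	for _, img := range images {
    		if len(img.RepoTags) > 0 {
    			fmt.Printf("%-55s %s bytes\n", img.RepoTags[0], img.Size)
    		}
    	}
    }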

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-828689 image ls --format yaml --alsologtostderr:
- id: 56686b9f57b63c70cc53aa4af01c6d60e437f5475286c823a7af68a070bfd089
repoDigests:
- localhost/minikube-local-cache-test@sha256:c7deabbf32a2c546ad46d9cef41aa6c1e41289baf9087711fa59ae929677ef89
repoTags:
- localhost/minikube-local-cache-test:functional-828689
size: "3328"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117609952"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "112170310"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests:
- docker.io/library/nginx@sha256:4d5a113fd08c4dd57aae6870942f8ab4a7d5fd1594b9749c4ae1b505cfd1e7d8
- docker.io/library/nginx@sha256:ed6d2c43c8fbcd3eaa44c9dab6d94cb346234476230dc1681227aa72d07181ee
repoTags:
- docker.io/library/nginx:latest
size: "191760844"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "85932953"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
- registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "63026502"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-828689 image ls --format yaml --alsologtostderr:
I0429 18:58:26.429169   26328 out.go:291] Setting OutFile to fd 1 ...
I0429 18:58:26.429492   26328 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 18:58:26.429505   26328 out.go:304] Setting ErrFile to fd 2...
I0429 18:58:26.429509   26328 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 18:58:26.429702   26328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
I0429 18:58:26.430271   26328 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 18:58:26.430372   26328 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 18:58:26.430763   26328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 18:58:26.430801   26328 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 18:58:26.446224   26328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45645
I0429 18:58:26.446759   26328 main.go:141] libmachine: () Calling .GetVersion
I0429 18:58:26.447325   26328 main.go:141] libmachine: Using API Version  1
I0429 18:58:26.447347   26328 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 18:58:26.447727   26328 main.go:141] libmachine: () Calling .GetMachineName
I0429 18:58:26.447912   26328 main.go:141] libmachine: (functional-828689) Calling .GetState
I0429 18:58:26.449651   26328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 18:58:26.449692   26328 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 18:58:26.464768   26328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46013
I0429 18:58:26.465203   26328 main.go:141] libmachine: () Calling .GetVersion
I0429 18:58:26.465775   26328 main.go:141] libmachine: Using API Version  1
I0429 18:58:26.465825   26328 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 18:58:26.466133   26328 main.go:141] libmachine: () Calling .GetMachineName
I0429 18:58:26.466338   26328 main.go:141] libmachine: (functional-828689) Calling .DriverName
I0429 18:58:26.466599   26328 ssh_runner.go:195] Run: systemctl --version
I0429 18:58:26.466620   26328 main.go:141] libmachine: (functional-828689) Calling .GetSSHHostname
I0429 18:58:26.469850   26328 main.go:141] libmachine: (functional-828689) DBG | domain functional-828689 has defined MAC address 52:54:00:39:76:01 in network mk-functional-828689
I0429 18:58:26.470309   26328 main.go:141] libmachine: (functional-828689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:76:01", ip: ""} in network mk-functional-828689: {Iface:virbr1 ExpiryTime:2024-04-29 19:53:44 +0000 UTC Type:0 Mac:52:54:00:39:76:01 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-828689 Clientid:01:52:54:00:39:76:01}
I0429 18:58:26.470338   26328 main.go:141] libmachine: (functional-828689) DBG | domain functional-828689 has defined IP address 192.168.39.72 and MAC address 52:54:00:39:76:01 in network mk-functional-828689
I0429 18:58:26.470514   26328 main.go:141] libmachine: (functional-828689) Calling .GetSSHPort
I0429 18:58:26.470687   26328 main.go:141] libmachine: (functional-828689) Calling .GetSSHKeyPath
I0429 18:58:26.470860   26328 main.go:141] libmachine: (functional-828689) Calling .GetSSHUsername
I0429 18:58:26.471017   26328 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/functional-828689/id_rsa Username:docker}
I0429 18:58:26.559054   26328 ssh_runner.go:195] Run: sudo crictl images --output json
I0429 18:58:26.645076   26328 main.go:141] libmachine: Making call to close driver server
I0429 18:58:26.645093   26328 main.go:141] libmachine: (functional-828689) Calling .Close
I0429 18:58:26.645425   26328 main.go:141] libmachine: (functional-828689) DBG | Closing plugin on server side
I0429 18:58:26.645429   26328 main.go:141] libmachine: Successfully made call to close driver server
I0429 18:58:26.645446   26328 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 18:58:26.645457   26328 main.go:141] libmachine: Making call to close driver server
I0429 18:58:26.645471   26328 main.go:141] libmachine: (functional-828689) Calling .Close
I0429 18:58:26.645705   26328 main.go:141] libmachine: Successfully made call to close driver server
I0429 18:58:26.645732   26328 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 18:58:26.645735   26328 main.go:141] libmachine: (functional-828689) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828689 ssh pgrep buildkitd: exit status 1 (242.704867ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image build -t localhost/my-image:functional-828689 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-828689 image build -t localhost/my-image:functional-828689 testdata/build --alsologtostderr: (4.463888845s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-828689 image build -t localhost/my-image:functional-828689 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1e31a3f279c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-828689
--> 834845f7bca
Successfully tagged localhost/my-image:functional-828689
834845f7bcacf14217463768968bbc27a10eb20a007406cb2bfcd98ca0593ae2
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-828689 image build -t localhost/my-image:functional-828689 testdata/build --alsologtostderr:
I0429 18:58:26.958576   26386 out.go:291] Setting OutFile to fd 1 ...
I0429 18:58:26.958793   26386 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 18:58:26.958808   26386 out.go:304] Setting ErrFile to fd 2...
I0429 18:58:26.958817   26386 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 18:58:26.959116   26386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
I0429 18:58:26.959959   26386 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 18:58:26.960569   26386 config.go:182] Loaded profile config "functional-828689": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0429 18:58:26.960973   26386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 18:58:26.961010   26386 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 18:58:26.976434   26386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37953
I0429 18:58:26.976961   26386 main.go:141] libmachine: () Calling .GetVersion
I0429 18:58:26.977591   26386 main.go:141] libmachine: Using API Version  1
I0429 18:58:26.977619   26386 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 18:58:26.978045   26386 main.go:141] libmachine: () Calling .GetMachineName
I0429 18:58:26.978245   26386 main.go:141] libmachine: (functional-828689) Calling .GetState
I0429 18:58:26.980307   26386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0429 18:58:26.980358   26386 main.go:141] libmachine: Launching plugin server for driver kvm2
I0429 18:58:26.995193   26386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
I0429 18:58:26.995629   26386 main.go:141] libmachine: () Calling .GetVersion
I0429 18:58:26.996122   26386 main.go:141] libmachine: Using API Version  1
I0429 18:58:26.996146   26386 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 18:58:26.996494   26386 main.go:141] libmachine: () Calling .GetMachineName
I0429 18:58:26.996678   26386 main.go:141] libmachine: (functional-828689) Calling .DriverName
I0429 18:58:26.996915   26386 ssh_runner.go:195] Run: systemctl --version
I0429 18:58:26.996945   26386 main.go:141] libmachine: (functional-828689) Calling .GetSSHHostname
I0429 18:58:26.999454   26386 main.go:141] libmachine: (functional-828689) DBG | domain functional-828689 has defined MAC address 52:54:00:39:76:01 in network mk-functional-828689
I0429 18:58:26.999816   26386 main.go:141] libmachine: (functional-828689) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:76:01", ip: ""} in network mk-functional-828689: {Iface:virbr1 ExpiryTime:2024-04-29 19:53:44 +0000 UTC Type:0 Mac:52:54:00:39:76:01 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-828689 Clientid:01:52:54:00:39:76:01}
I0429 18:58:26.999842   26386 main.go:141] libmachine: (functional-828689) DBG | domain functional-828689 has defined IP address 192.168.39.72 and MAC address 52:54:00:39:76:01 in network mk-functional-828689
I0429 18:58:26.999999   26386 main.go:141] libmachine: (functional-828689) Calling .GetSSHPort
I0429 18:58:27.000135   26386 main.go:141] libmachine: (functional-828689) Calling .GetSSHKeyPath
I0429 18:58:27.000255   26386 main.go:141] libmachine: (functional-828689) Calling .GetSSHUsername
I0429 18:58:27.000364   26386 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/functional-828689/id_rsa Username:docker}
I0429 18:58:27.150886   26386 build_images.go:161] Building image from path: /tmp/build.623129173.tar
I0429 18:58:27.150961   26386 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0429 18:58:27.181601   26386 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.623129173.tar
I0429 18:58:27.205253   26386 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.623129173.tar: stat -c "%s %y" /var/lib/minikube/build/build.623129173.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.623129173.tar': No such file or directory
I0429 18:58:27.205300   26386 ssh_runner.go:362] scp /tmp/build.623129173.tar --> /var/lib/minikube/build/build.623129173.tar (3072 bytes)
I0429 18:58:27.252970   26386 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.623129173
I0429 18:58:27.266086   26386 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.623129173 -xf /var/lib/minikube/build/build.623129173.tar
I0429 18:58:27.283188   26386 crio.go:315] Building image: /var/lib/minikube/build/build.623129173
I0429 18:58:27.283277   26386 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-828689 /var/lib/minikube/build/build.623129173 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0429 18:58:31.277916   26386 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-828689 /var/lib/minikube/build/build.623129173 --cgroup-manager=cgroupfs: (3.994605563s)
I0429 18:58:31.277984   26386 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.623129173
I0429 18:58:31.313518   26386 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.623129173.tar
I0429 18:58:31.353315   26386 build_images.go:217] Built localhost/my-image:functional-828689 from /tmp/build.623129173.tar
I0429 18:58:31.353350   26386 build_images.go:133] succeeded building to: functional-828689
I0429 18:58:31.353357   26386 build_images.go:134] failed building to: 
I0429 18:58:31.353383   26386 main.go:141] libmachine: Making call to close driver server
I0429 18:58:31.353400   26386 main.go:141] libmachine: (functional-828689) Calling .Close
I0429 18:58:31.353664   26386 main.go:141] libmachine: Successfully made call to close driver server
I0429 18:58:31.353688   26386 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 18:58:31.353696   26386 main.go:141] libmachine: Making call to close driver server
I0429 18:58:31.353711   26386 main.go:141] libmachine: (functional-828689) Calling .Close
I0429 18:58:31.353729   26386 main.go:141] libmachine: (functional-828689) DBG | Closing plugin on server side
I0429 18:58:31.353975   26386 main.go:141] libmachine: Successfully made call to close driver server
I0429 18:58:31.354004   26386 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 18:58:31.354016   26386 main.go:141] libmachine: (functional-828689) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.08s)
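
A sketch of reproducing the build above: write a build context equivalent to the three STEP lines in the stdout (FROM the busybox base, RUN true, ADD content.txt) and hand it to minikube image build. The exact contents of testdata/build are not shown in this report, so the Dockerfile and content.txt here are approximations.

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    )

    func main() {
    	dir, err := os.MkdirTemp("", "build-context")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer os.RemoveAll(dir)

    	// Approximation of the build steps seen in the log.
    	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
    	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
    		log.Fatal(err)
    	}

    	// On the crio runtime this is executed as `sudo podman build` inside the guest,
    	// as the Stderr above shows.
    	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-828689",
    		"image", "build", "-t", "localhost/my-image:functional-828689", dir)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatalf("image build failed: %v", err)
    	}
    }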

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.098427158s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-828689
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.12s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-828689 /tmp/TestFunctionalparallelMountCmdspecific-port3733955830/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828689 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.091398ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-828689 /tmp/TestFunctionalparallelMountCmdspecific-port3733955830/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828689 ssh "sudo umount -f /mount-9p": exit status 1 (282.612797ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-828689 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-828689 /tmp/TestFunctionalparallelMountCmdspecific-port3733955830/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)
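Note: the specific-port mount exercised here can be reproduced outside the test harness. A rough sketch, assuming the functional-828689 profile and a hypothetical host directory /tmp/hostdir; the mount command stays in the foreground, so it is backgrounded:

    # 9p mount on a fixed port, mirroring the test's --port 46464.
    out/minikube-linux-amd64 mount -p functional-828689 /tmp/hostdir:/mount-9p --port 46464 &
    # Verify from inside the guest, then unmount.
    out/minikube-linux-amd64 -p functional-828689 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-828689 ssh "sudo umount /mount-9p"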

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image load --daemon gcr.io/google-containers/addon-resizer:functional-828689 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-828689 image load --daemon gcr.io/google-containers/addon-resizer:functional-828689 --alsologtostderr: (6.410824764s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.66s)
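Note: the --daemon form takes an image that already exists in the host's Docker daemon and loads it into the cluster's crio image store. A minimal sketch, assuming the addon-resizer tag created in the Setup step above:

    # Push the host-daemon image into the cluster runtime, then list it.
    out/minikube-linux-amd64 -p functional-828689 image load --daemon gcr.io/google-containers/addon-resizer:functional-828689
    out/minikube-linux-amd64 -p functional-828689 image ls | grep addon-resizer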

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-828689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4027601774/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-828689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4027601774/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-828689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4027601774/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-828689 ssh "findmnt -T" /mount1: exit status 1 (386.19506ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-828689 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-828689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4027601774/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-828689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4027601774/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-828689 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4027601774/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)
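Note: rather than tracking down each background mount helper, the cleanup step relies on the --kill flag. A one-line sketch of the same cleanup, assuming the functional-828689 profile:

    # Terminate all mount processes associated with the profile.
    out/minikube-linux-amd64 mount -p functional-828689 --kill=true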

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image load --daemon gcr.io/google-containers/addon-resizer:functional-828689 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-828689 image load --daemon gcr.io/google-containers/addon-resizer:functional-828689 --alsologtostderr: (2.7723201s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.960678603s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-828689
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image load --daemon gcr.io/google-containers/addon-resizer:functional-828689 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-828689 image load --daemon gcr.io/google-containers/addon-resizer:functional-828689 --alsologtostderr: (5.716210859s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image rm gcr.io/google-containers/addon-resizer:functional-828689 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-828689
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-828689 image save --daemon gcr.io/google-containers/addon-resizer:functional-828689 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-828689
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)
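Note: image save --daemon is the inverse of the load step above: it exports an image from the cluster runtime back into the host's Docker daemon. A minimal round-trip sketch using the same addon-resizer tag:

    # Drop the tag from the host daemon, restore it from the cluster, then verify it is back.
    docker rmi gcr.io/google-containers/addon-resizer:functional-828689
    out/minikube-linux-amd64 -p functional-828689 image save --daemon gcr.io/google-containers/addon-resizer:functional-828689
    docker image inspect gcr.io/google-containers/addon-resizer:functional-828689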

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-828689
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-828689
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-828689
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (283.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-058855 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0429 18:59:00.893871   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 18:59:28.583769   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 19:02:48.915014   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:02:48.920348   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:02:48.930611   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:02:48.950919   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:02:48.991093   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:02:49.071468   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:02:49.231779   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:02:49.552218   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:02:50.193257   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:02:51.473976   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:02:54.034223   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:02:59.154768   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:03:09.395554   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-058855 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m42.326351806s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (283.07s)
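Note: the --ha flag used above provisions a cluster with multiple control-plane nodes instead of a single one. A minimal sketch of starting and inspecting such a cluster by hand, assuming a hypothetical profile name ha-demo:

    # Multi-control-plane cluster on the crio runtime, 2200 MB per node.
    out/minikube-linux-amd64 start -p ha-demo --ha --wait=true --memory=2200 --driver=kvm2 --container-runtime=crio
    # Per-node status for the profile, then the node list as Kubernetes sees it.
    out/minikube-linux-amd64 -p ha-demo status
    kubectl --context ha-demo get nodes -o wide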

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- rollout status deployment/busybox
E0429 19:03:29.876536   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-058855 -- rollout status deployment/busybox: (5.898308351s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-nst7c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-pr84n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-xll26 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-nst7c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-pr84n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-xll26 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-nst7c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-pr84n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-xll26 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.38s)
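Note: the DeployApp step checks in-cluster DNS from every busybox replica. The same probe can be run against a single pod; a rough sketch, assuming the ha-058855 context and that the busybox pods are the only pods in the default namespace (the jsonpath below simply grabs the first one):

    # Resolve the in-cluster service name from inside one of the busybox pods.
    POD=$(kubectl --context ha-058855 get pods -o jsonpath='{.items[0].metadata.name}')
    kubectl --context ha-058855 exec "$POD" -- nslookup kubernetes.default.svc.cluster.local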

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-nst7c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-nst7c -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-pr84n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-pr84n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-xll26 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-058855 -- exec busybox-fc5497c4f-xll26 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.41s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (47.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-058855 -v=7 --alsologtostderr
E0429 19:04:00.893450   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 19:04:10.837110   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-058855 -v=7 --alsologtostderr: (46.081657317s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.02s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-058855 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp testdata/cp-test.txt ha-058855:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1826286980/001/cp-test_ha-058855.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855:/home/docker/cp-test.txt ha-058855-m02:/home/docker/cp-test_ha-058855_ha-058855-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m02 "sudo cat /home/docker/cp-test_ha-058855_ha-058855-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855:/home/docker/cp-test.txt ha-058855-m03:/home/docker/cp-test_ha-058855_ha-058855-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m03 "sudo cat /home/docker/cp-test_ha-058855_ha-058855-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855:/home/docker/cp-test.txt ha-058855-m04:/home/docker/cp-test_ha-058855_ha-058855-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m04 "sudo cat /home/docker/cp-test_ha-058855_ha-058855-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp testdata/cp-test.txt ha-058855-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1826286980/001/cp-test_ha-058855-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855-m02:/home/docker/cp-test.txt ha-058855:/home/docker/cp-test_ha-058855-m02_ha-058855.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855 "sudo cat /home/docker/cp-test_ha-058855-m02_ha-058855.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855-m02:/home/docker/cp-test.txt ha-058855-m03:/home/docker/cp-test_ha-058855-m02_ha-058855-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m03 "sudo cat /home/docker/cp-test_ha-058855-m02_ha-058855-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855-m02:/home/docker/cp-test.txt ha-058855-m04:/home/docker/cp-test_ha-058855-m02_ha-058855-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m04 "sudo cat /home/docker/cp-test_ha-058855-m02_ha-058855-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp testdata/cp-test.txt ha-058855-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1826286980/001/cp-test_ha-058855-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt ha-058855:/home/docker/cp-test_ha-058855-m03_ha-058855.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855 "sudo cat /home/docker/cp-test_ha-058855-m03_ha-058855.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt ha-058855-m02:/home/docker/cp-test_ha-058855-m03_ha-058855-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m02 "sudo cat /home/docker/cp-test_ha-058855-m03_ha-058855-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855-m03:/home/docker/cp-test.txt ha-058855-m04:/home/docker/cp-test_ha-058855-m03_ha-058855-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m04 "sudo cat /home/docker/cp-test_ha-058855-m03_ha-058855-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp testdata/cp-test.txt ha-058855-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1826286980/001/cp-test_ha-058855-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt ha-058855:/home/docker/cp-test_ha-058855-m04_ha-058855.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855 "sudo cat /home/docker/cp-test_ha-058855-m04_ha-058855.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt ha-058855-m02:/home/docker/cp-test_ha-058855-m04_ha-058855-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m02 "sudo cat /home/docker/cp-test_ha-058855-m04_ha-058855-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 cp ha-058855-m04:/home/docker/cp-test.txt ha-058855-m03:/home/docker/cp-test_ha-058855-m04_ha-058855-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m03 "sudo cat /home/docker/cp-test_ha-058855-m04_ha-058855-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.92s)
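Note: the CopyFile matrix above runs minikube cp in every direction between the four nodes. A condensed sketch of one host-to-node copy and its verification, assuming the ha-058855 profile:

    # Copy a file from the host into the second control-plane node, then read it back over SSH.
    out/minikube-linux-amd64 -p ha-058855 cp testdata/cp-test.txt ha-058855-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-058855 ssh -n ha-058855-m02 "sudo cat /home/docker/cp-test.txt"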

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.526912652s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.53s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.43s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-058855 node delete m03 -v=7 --alsologtostderr: (16.882986935s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.66s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (293.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-058855 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0429 19:17:48.914367   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 19:19:00.893758   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 19:19:11.958480   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-058855 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m52.586012438s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (293.35s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-058855 --control-plane -v=7 --alsologtostderr
E0429 19:22:48.915194   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-058855 --control-plane -v=7 --alsologtostderr: (1m18.49981258s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-058855 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.36s)
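Note: joining an extra control-plane member after the restart is a single node add call. A minimal sketch, assuming the ha-058855 profile:

    # Add another control-plane node and confirm the profile reports it.
    out/minikube-linux-amd64 node add -p ha-058855 --control-plane
    out/minikube-linux-amd64 -p ha-058855 status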

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

                                                
                                    
TestJSONOutput/start/Command (58.54s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-629445 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0429 19:24:00.893451   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-629445 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (58.535943946s)
--- PASS: TestJSONOutput/start/Command (58.54s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-629445 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-629445 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.42s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-629445 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-629445 --output=json --user=testUser: (7.422656293s)
--- PASS: TestJSONOutput/stop/Command (7.42s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-289865 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-289865 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.472711ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0e183944-ba35-43f4-9c16-76dca140db17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-289865] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c49b1cf-a223-49a1-9a86-cf24908fc839","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18774"}}
	{"specversion":"1.0","id":"fc3c42e1-2ae3-46fe-92ed-93dbd29860ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"428fe134-12c4-496c-9c63-fc1f023ce599","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig"}}
	{"specversion":"1.0","id":"6d7bfd4c-4ad9-456a-9e64-7cb5977a9708","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube"}}
	{"specversion":"1.0","id":"ba14f495-eab0-4b2f-8bd1-bdb7767b3c32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8835ec17-01c3-4902-b862-244110dbdbc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"38584f4d-ad49-49b5-898c-7edf716fa293","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-289865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-289865
--- PASS: TestErrorJSONOutput (0.21s)
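Note: with --output=json minikube writes one CloudEvents-style object per line to stdout, so the stream is straightforward to post-process. A small sketch that extracts the error message from a failing start, assuming jq is installed and using a hypothetical profile name json-demo; the unsupported 'fail' driver is only there to force the DRV_UNSUPPORTED_OS error shown above:

    # Print just the error events from the JSON event stream.
    out/minikube-linux-amd64 start -p json-demo --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    out/minikube-linux-amd64 delete -p json-demo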

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (100.52s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-735562 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-735562 --driver=kvm2  --container-runtime=crio: (47.103657664s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-738652 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-738652 --driver=kvm2  --container-runtime=crio: (50.545769213s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-735562
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-738652
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-738652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-738652
helpers_test.go:175: Cleaning up "first-735562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-735562
--- PASS: TestMinikubeProfile (100.52s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.46s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-687574 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0429 19:27:03.948959   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-687574 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.455235736s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.46s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-687574 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-687574 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.92s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-700746 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-700746 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.92261245s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.92s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-700746 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-700746 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-687574 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-700746 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-700746 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (2.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-700746
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-700746: (2.29218249s)
--- PASS: TestMountStart/serial/Stop (2.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.57s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-700746
E0429 19:27:48.915614   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-700746: (22.573783646s)
--- PASS: TestMountStart/serial/RestartStopped (23.57s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-700746 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-700746 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-773806 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0429 19:29:00.894381   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-773806 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m48.648386426s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.06s)
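Note: a two-node cluster (one control plane plus one worker) only needs --nodes=2 on start. A minimal sketch, assuming a hypothetical profile name multi-demo:

    # One control-plane and one worker node on the crio runtime.
    out/minikube-linux-amd64 start -p multi-demo --nodes=2 --wait=true --memory=2200 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p multi-demo status
    kubectl --context multi-demo get nodes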

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-773806 -- rollout status deployment/busybox: (4.147755465s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- exec busybox-fc5497c4f-b9pvl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- exec busybox-fc5497c4f-rd8tm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- exec busybox-fc5497c4f-b9pvl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- exec busybox-fc5497c4f-rd8tm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- exec busybox-fc5497c4f-b9pvl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- exec busybox-fc5497c4f-rd8tm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.79s)

TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- exec busybox-fc5497c4f-b9pvl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- exec busybox-fc5497c4f-b9pvl -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- exec busybox-fc5497c4f-rd8tm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-773806 -- exec busybox-fc5497c4f-rd8tm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

TestMultiNode/serial/AddNode (41.27s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-773806 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-773806 -v 3 --alsologtostderr: (40.69349223s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.27s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-773806 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (7.52s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 cp testdata/cp-test.txt multinode-773806:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 cp multinode-773806:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1658952582/001/cp-test_multinode-773806.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 cp multinode-773806:/home/docker/cp-test.txt multinode-773806-m02:/home/docker/cp-test_multinode-773806_multinode-773806-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806-m02 "sudo cat /home/docker/cp-test_multinode-773806_multinode-773806-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 cp multinode-773806:/home/docker/cp-test.txt multinode-773806-m03:/home/docker/cp-test_multinode-773806_multinode-773806-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806-m03 "sudo cat /home/docker/cp-test_multinode-773806_multinode-773806-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 cp testdata/cp-test.txt multinode-773806-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 cp multinode-773806-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1658952582/001/cp-test_multinode-773806-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 cp multinode-773806-m02:/home/docker/cp-test.txt multinode-773806:/home/docker/cp-test_multinode-773806-m02_multinode-773806.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806 "sudo cat /home/docker/cp-test_multinode-773806-m02_multinode-773806.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 cp multinode-773806-m02:/home/docker/cp-test.txt multinode-773806-m03:/home/docker/cp-test_multinode-773806-m02_multinode-773806-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806-m03 "sudo cat /home/docker/cp-test_multinode-773806-m02_multinode-773806-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 cp testdata/cp-test.txt multinode-773806-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 cp multinode-773806-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1658952582/001/cp-test_multinode-773806-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 cp multinode-773806-m03:/home/docker/cp-test.txt multinode-773806:/home/docker/cp-test_multinode-773806-m03_multinode-773806.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806 "sudo cat /home/docker/cp-test_multinode-773806-m03_multinode-773806.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 cp multinode-773806-m03:/home/docker/cp-test.txt multinode-773806-m02:/home/docker/cp-test_multinode-773806-m03_multinode-773806-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 ssh -n multinode-773806-m02 "sudo cat /home/docker/cp-test_multinode-773806-m03_multinode-773806-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.52s)

TestMultiNode/serial/StopNode (2.46s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-773806 node stop m03: (1.575612236s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-773806 status: exit status 7 (438.535467ms)

-- stdout --
	multinode-773806
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-773806-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-773806-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-773806 status --alsologtostderr: exit status 7 (441.502534ms)

-- stdout --
	multinode-773806
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-773806-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-773806-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0429 19:30:58.565023   48299 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:30:58.565120   48299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:30:58.565129   48299 out.go:304] Setting ErrFile to fd 2...
	I0429 19:30:58.565133   48299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:30:58.565330   48299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:30:58.565487   48299 out.go:298] Setting JSON to false
	I0429 19:30:58.565520   48299 mustload.go:65] Loading cluster: multinode-773806
	I0429 19:30:58.565575   48299 notify.go:220] Checking for updates...
	I0429 19:30:58.566016   48299 config.go:182] Loaded profile config "multinode-773806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:30:58.566035   48299 status.go:255] checking status of multinode-773806 ...
	I0429 19:30:58.566460   48299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:30:58.566511   48299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:30:58.582509   48299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0429 19:30:58.582886   48299 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:30:58.583479   48299 main.go:141] libmachine: Using API Version  1
	I0429 19:30:58.583507   48299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:30:58.583944   48299 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:30:58.584174   48299 main.go:141] libmachine: (multinode-773806) Calling .GetState
	I0429 19:30:58.585827   48299 status.go:330] multinode-773806 host status = "Running" (err=<nil>)
	I0429 19:30:58.585842   48299 host.go:66] Checking if "multinode-773806" exists ...
	I0429 19:30:58.586188   48299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:30:58.586235   48299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:30:58.601466   48299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35839
	I0429 19:30:58.601896   48299 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:30:58.602383   48299 main.go:141] libmachine: Using API Version  1
	I0429 19:30:58.602400   48299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:30:58.602680   48299 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:30:58.602860   48299 main.go:141] libmachine: (multinode-773806) Calling .GetIP
	I0429 19:30:58.605748   48299 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:30:58.606238   48299 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:30:58.606272   48299 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:30:58.606444   48299 host.go:66] Checking if "multinode-773806" exists ...
	I0429 19:30:58.606846   48299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:30:58.606892   48299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:30:58.623362   48299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I0429 19:30:58.623804   48299 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:30:58.624244   48299 main.go:141] libmachine: Using API Version  1
	I0429 19:30:58.624270   48299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:30:58.624624   48299 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:30:58.624824   48299 main.go:141] libmachine: (multinode-773806) Calling .DriverName
	I0429 19:30:58.625015   48299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:30:58.625050   48299 main.go:141] libmachine: (multinode-773806) Calling .GetSSHHostname
	I0429 19:30:58.627844   48299 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:30:58.628191   48299 main.go:141] libmachine: (multinode-773806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:83:25", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:28:27 +0000 UTC Type:0 Mac:52:54:00:19:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-773806 Clientid:01:52:54:00:19:83:25}
	I0429 19:30:58.628221   48299 main.go:141] libmachine: (multinode-773806) DBG | domain multinode-773806 has defined IP address 192.168.39.127 and MAC address 52:54:00:19:83:25 in network mk-multinode-773806
	I0429 19:30:58.628312   48299 main.go:141] libmachine: (multinode-773806) Calling .GetSSHPort
	I0429 19:30:58.628474   48299 main.go:141] libmachine: (multinode-773806) Calling .GetSSHKeyPath
	I0429 19:30:58.628645   48299 main.go:141] libmachine: (multinode-773806) Calling .GetSSHUsername
	I0429 19:30:58.628785   48299 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/multinode-773806/id_rsa Username:docker}
	I0429 19:30:58.714546   48299 ssh_runner.go:195] Run: systemctl --version
	I0429 19:30:58.721503   48299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:30:58.737901   48299 kubeconfig.go:125] found "multinode-773806" server: "https://192.168.39.127:8443"
	I0429 19:30:58.737933   48299 api_server.go:166] Checking apiserver status ...
	I0429 19:30:58.737977   48299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 19:30:58.753038   48299 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup
	W0429 19:30:58.765363   48299 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 19:30:58.765413   48299 ssh_runner.go:195] Run: ls
	I0429 19:30:58.770852   48299 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I0429 19:30:58.776722   48299 api_server.go:279] https://192.168.39.127:8443/healthz returned 200:
	ok
	I0429 19:30:58.776747   48299 status.go:422] multinode-773806 apiserver status = Running (err=<nil>)
	I0429 19:30:58.776758   48299 status.go:257] multinode-773806 status: &{Name:multinode-773806 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:30:58.776778   48299 status.go:255] checking status of multinode-773806-m02 ...
	I0429 19:30:58.777049   48299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:30:58.777091   48299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:30:58.792086   48299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45245
	I0429 19:30:58.792463   48299 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:30:58.792905   48299 main.go:141] libmachine: Using API Version  1
	I0429 19:30:58.792931   48299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:30:58.793240   48299 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:30:58.793479   48299 main.go:141] libmachine: (multinode-773806-m02) Calling .GetState
	I0429 19:30:58.794959   48299 status.go:330] multinode-773806-m02 host status = "Running" (err=<nil>)
	I0429 19:30:58.794978   48299 host.go:66] Checking if "multinode-773806-m02" exists ...
	I0429 19:30:58.795268   48299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:30:58.795324   48299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:30:58.809996   48299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44471
	I0429 19:30:58.810416   48299 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:30:58.810831   48299 main.go:141] libmachine: Using API Version  1
	I0429 19:30:58.810851   48299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:30:58.811136   48299 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:30:58.811316   48299 main.go:141] libmachine: (multinode-773806-m02) Calling .GetIP
	I0429 19:30:58.813864   48299 main.go:141] libmachine: (multinode-773806-m02) DBG | domain multinode-773806-m02 has defined MAC address 52:54:00:41:2f:cd in network mk-multinode-773806
	I0429 19:30:58.814261   48299 main.go:141] libmachine: (multinode-773806-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2f:cd", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:29:33 +0000 UTC Type:0 Mac:52:54:00:41:2f:cd Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-773806-m02 Clientid:01:52:54:00:41:2f:cd}
	I0429 19:30:58.814290   48299 main.go:141] libmachine: (multinode-773806-m02) DBG | domain multinode-773806-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:41:2f:cd in network mk-multinode-773806
	I0429 19:30:58.814386   48299 host.go:66] Checking if "multinode-773806-m02" exists ...
	I0429 19:30:58.814770   48299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:30:58.814820   48299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:30:58.829923   48299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34931
	I0429 19:30:58.830370   48299 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:30:58.830826   48299 main.go:141] libmachine: Using API Version  1
	I0429 19:30:58.830846   48299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:30:58.831139   48299 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:30:58.831324   48299 main.go:141] libmachine: (multinode-773806-m02) Calling .DriverName
	I0429 19:30:58.831499   48299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 19:30:58.831524   48299 main.go:141] libmachine: (multinode-773806-m02) Calling .GetSSHHostname
	I0429 19:30:58.834130   48299 main.go:141] libmachine: (multinode-773806-m02) DBG | domain multinode-773806-m02 has defined MAC address 52:54:00:41:2f:cd in network mk-multinode-773806
	I0429 19:30:58.834613   48299 main.go:141] libmachine: (multinode-773806-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2f:cd", ip: ""} in network mk-multinode-773806: {Iface:virbr1 ExpiryTime:2024-04-29 20:29:33 +0000 UTC Type:0 Mac:52:54:00:41:2f:cd Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-773806-m02 Clientid:01:52:54:00:41:2f:cd}
	I0429 19:30:58.834643   48299 main.go:141] libmachine: (multinode-773806-m02) DBG | domain multinode-773806-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:41:2f:cd in network mk-multinode-773806
	I0429 19:30:58.834768   48299 main.go:141] libmachine: (multinode-773806-m02) Calling .GetSSHPort
	I0429 19:30:58.834947   48299 main.go:141] libmachine: (multinode-773806-m02) Calling .GetSSHKeyPath
	I0429 19:30:58.835100   48299 main.go:141] libmachine: (multinode-773806-m02) Calling .GetSSHUsername
	I0429 19:30:58.835244   48299 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18774-7754/.minikube/machines/multinode-773806-m02/id_rsa Username:docker}
	I0429 19:30:58.914823   48299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 19:30:58.929796   48299 status.go:257] multinode-773806-m02 status: &{Name:multinode-773806-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 19:30:58.929838   48299 status.go:255] checking status of multinode-773806-m03 ...
	I0429 19:30:58.930191   48299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0429 19:30:58.930243   48299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0429 19:30:58.946593   48299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33463
	I0429 19:30:58.947115   48299 main.go:141] libmachine: () Calling .GetVersion
	I0429 19:30:58.947605   48299 main.go:141] libmachine: Using API Version  1
	I0429 19:30:58.947626   48299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 19:30:58.947912   48299 main.go:141] libmachine: () Calling .GetMachineName
	I0429 19:30:58.948086   48299 main.go:141] libmachine: (multinode-773806-m03) Calling .GetState
	I0429 19:30:58.949460   48299 status.go:330] multinode-773806-m03 host status = "Stopped" (err=<nil>)
	I0429 19:30:58.949479   48299 status.go:343] host is not running, skipping remaining checks
	I0429 19:30:58.949487   48299 status.go:257] multinode-773806-m03 status: &{Name:multinode-773806-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)

TestMultiNode/serial/StartAfterStop (32.22s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-773806 node start m03 -v=7 --alsologtostderr: (31.554951772s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.22s)

TestMultiNode/serial/DeleteNode (2.31s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-773806 node delete m03: (1.765635995s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.31s)

TestMultiNode/serial/RestartMultiNode (178.9s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-773806 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-773806 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m58.337880922s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-773806 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.90s)

TestMultiNode/serial/ValidateNameConflict (45.76s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-773806
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-773806-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-773806-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (72.576406ms)

-- stdout --
	* [multinode-773806-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-773806-m02' is duplicated with machine name 'multinode-773806-m02' in profile 'multinode-773806'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-773806-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-773806-m03 --driver=kvm2  --container-runtime=crio: (44.407777761s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-773806
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-773806: exit status 80 (232.656265ms)

-- stdout --
	* Adding node m03 to cluster multinode-773806 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-773806-m03 already exists in multinode-773806-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-773806-m03
E0429 19:42:48.914426   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.76s)

TestScheduledStopUnix (115.1s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-471902 --memory=2048 --driver=kvm2  --container-runtime=crio
E0429 19:47:48.915317   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-471902 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.346865109s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-471902 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-471902 -n scheduled-stop-471902
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-471902 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-471902 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-471902 -n scheduled-stop-471902
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-471902
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-471902 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0429 19:49:00.894142   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-471902
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-471902: exit status 7 (79.575455ms)

-- stdout --
	scheduled-stop-471902
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-471902 -n scheduled-stop-471902
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-471902 -n scheduled-stop-471902: exit status 7 (74.818224ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-471902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-471902
--- PASS: TestScheduledStopUnix (115.10s)

TestRunningBinaryUpgrade (191.53s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2106553306 start -p running-upgrade-407092 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2106553306 start -p running-upgrade-407092 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m35.596124766s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-407092 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0429 19:52:48.915318   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-407092 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.068897741s)
helpers_test.go:175: Cleaning up "running-upgrade-407092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-407092
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-407092: (1.224960067s)
--- PASS: TestRunningBinaryUpgrade (191.53s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-699902 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-699902 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (102.316597ms)

-- stdout --
	* [NoKubernetes-699902] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (126.93s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-699902 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-699902 --driver=kvm2  --container-runtime=crio: (2m6.635064227s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-699902 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (126.93s)

TestStoppedBinaryUpgrade/Setup (2.72s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.72s)

TestStoppedBinaryUpgrade/Upgrade (131.11s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2125539097 start -p stopped-upgrade-632729 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2125539097 start -p stopped-upgrade-632729 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m22.33202992s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2125539097 -p stopped-upgrade-632729 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2125539097 -p stopped-upgrade-632729 stop: (2.130222509s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-632729 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-632729 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.644186706s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (131.11s)

TestNoKubernetes/serial/StartWithStopK8s (55.86s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-699902 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-699902 --no-kubernetes --driver=kvm2  --container-runtime=crio: (54.636383077s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-699902 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-699902 status -o json: exit status 2 (258.092409ms)

-- stdout --
	{"Name":"NoKubernetes-699902","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-699902
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (55.86s)

TestNoKubernetes/serial/Start (43.7s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-699902 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0429 19:52:31.960372   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-699902 --no-kubernetes --driver=kvm2  --container-runtime=crio: (43.704235133s)
--- PASS: TestNoKubernetes/serial/Start (43.70s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-632729
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

TestPause/serial/Start (63.11s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-467472 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-467472 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m3.10717723s)
--- PASS: TestPause/serial/Start (63.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-699902 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-699902 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.975476ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (1.94s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.350934104s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.94s)

TestNoKubernetes/serial/Stop (1.6s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-699902
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-699902: (1.601966284s)
--- PASS: TestNoKubernetes/serial/Stop (1.60s)

TestNoKubernetes/serial/StartNoArgs (42.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-699902 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-699902 --driver=kvm2  --container-runtime=crio: (42.343177086s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.34s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-699902 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-699902 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.204148ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestNetworkPlugins/group/false (5.09s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-870155 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-870155 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (998.426713ms)

-- stdout --
	* [false-870155] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18774
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0429 19:53:53.716886   59879 out.go:291] Setting OutFile to fd 1 ...
	I0429 19:53:53.717012   59879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:53:53.717022   59879 out.go:304] Setting ErrFile to fd 2...
	I0429 19:53:53.717027   59879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 19:53:53.717254   59879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18774-7754/.minikube/bin
	I0429 19:53:53.717813   59879 out.go:298] Setting JSON to false
	I0429 19:53:53.718872   59879 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5732,"bootTime":1714414702,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0429 19:53:53.718962   59879 start.go:139] virtualization: kvm guest
	I0429 19:53:53.721116   59879 out.go:177] * [false-870155] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0429 19:53:53.722457   59879 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 19:53:53.723751   59879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 19:53:53.722456   59879 notify.go:220] Checking for updates...
	I0429 19:53:53.726287   59879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18774-7754/kubeconfig
	I0429 19:53:53.727543   59879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18774-7754/.minikube
	I0429 19:53:53.728767   59879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0429 19:53:53.729948   59879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 19:53:53.731521   59879 config.go:182] Loaded profile config "kubernetes-upgrade-935578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:53:53.731683   59879 config.go:182] Loaded profile config "pause-467472": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0429 19:53:53.731783   59879 config.go:182] Loaded profile config "running-upgrade-407092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0429 19:53:53.731870   59879 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 19:53:54.642776   59879 out.go:177] * Using the kvm2 driver based on user configuration
	I0429 19:53:54.644015   59879 start.go:297] selected driver: kvm2
	I0429 19:53:54.644031   59879 start.go:901] validating driver "kvm2" against <nil>
	I0429 19:53:54.644047   59879 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 19:53:54.645804   59879 out.go:177] 
	W0429 19:53:54.646925   59879 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0429 19:53:54.648030   59879 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-870155 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-870155

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-870155

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-870155

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-870155

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-870155

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-870155

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-870155

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-870155

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-870155

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-870155

>>> host: /etc/nsswitch.conf:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

>>> host: /etc/hosts:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

>>> host: /etc/resolv.conf:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-870155

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-870155" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-870155" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 19:53:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.61.135:8443
  name: running-upgrade-407092
contexts:
- context:
    cluster: running-upgrade-407092
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 19:53:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: running-upgrade-407092
  name: running-upgrade-407092
current-context: running-upgrade-407092
kind: Config
preferences: {}
users:
- name: running-upgrade-407092
  user:
    client-certificate: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/running-upgrade-407092/client.crt
    client-key: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/running-upgrade-407092/client.key
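
The kubeconfig above only knows the running-upgrade-407092 cluster, context, and user, which is why every kubectl-based probe in this debugLogs block reported "context was not found for specified context: false-870155" and every minikube-based probe reported that the false-870155 profile does not exist. A minimal sketch of how the available contexts and profiles can be listed when triaging this kind of output; these are standard kubectl and minikube commands, not part of the test run itself:

# show every context the active kubeconfig knows about
kubectl config get-contexts

# show every minikube profile that still exists on this host
minikube profile list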

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-870155

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-870155"

                                                
                                                
----------------------- debugLogs end: false-870155 [took: 3.888432003s] --------------------------------
helpers_test.go:175: Cleaning up "false-870155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-870155
--- PASS: TestNetworkPlugins/group/false (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (144.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-456788 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-456788 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (2m24.923148638s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (144.92s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (94.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-161370 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 19:57:48.915427   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-161370 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m34.704409299s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (94.70s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-161370 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [804c69e6-0763-4d83-90a0-c3c294eeffd4] Pending
helpers_test.go:344: "busybox" [804c69e6-0763-4d83-90a0-c3c294eeffd4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [804c69e6-0763-4d83-90a0-c3c294eeffd4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003995293s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-161370 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-161370 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-161370 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-456788 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e9a28ea1-3811-4d3f-99ea-7b44a56ce45f] Pending
helpers_test.go:344: "busybox" [e9a28ea1-3811-4d3f-99ea-7b44a56ce45f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e9a28ea1-3811-4d3f-99ea-7b44a56ce45f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005717792s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-456788 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-456788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-456788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.050936431s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-456788 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (65.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-866143 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 19:59:00.894387   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-866143 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m5.646876859s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (65.65s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-866143 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [60422741-64fe-4169-bdbd-384825776aef] Pending
helpers_test.go:344: "busybox" [60422741-64fe-4169-bdbd-384825776aef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [60422741-64fe-4169-bdbd-384825776aef] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004430879s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-866143 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-866143 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-866143 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.065306781s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-866143 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (686.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-161370 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-161370 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (11m26.479301033s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-161370 -n embed-certs-161370
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (686.76s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (605.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-456788 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-456788 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (10m5.072156011s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-456788 -n no-preload-456788
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (605.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (5.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-919612 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-919612 --alsologtostderr -v=3: (5.302516146s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-919612 -n old-k8s-version-919612: exit status 7 (75.036968ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-919612 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (492.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-866143 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 20:02:48.916699   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 20:04:00.893800   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 20:07:48.915411   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
E0429 20:09:00.893818   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
E0429 20:09:11.961407   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-866143 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (8m11.73432833s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866143 -n default-k8s-diff-port-866143
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (492.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (59.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-538390 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0429 20:25:51.962591   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-538390 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (59.16012835s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (73.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m13.717592408s)
--- PASS: TestNetworkPlugins/group/auto/Start (73.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (115.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m55.773818734s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (115.77s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-538390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-538390 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.658837876s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.66s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-538390 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-538390 --alsologtostderr -v=3: (7.411855932s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-538390 -n newest-cni-538390
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-538390 -n newest-cni-538390: exit status 7 (84.507209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-538390 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (54.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-538390 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-538390 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (54.653909112s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-538390 -n newest-cni-538390
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (54.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-870155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-870155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-l2mcd" [d77a7fe1-8d19-439d-aa67-10d5accaf809] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-l2mcd" [d77a7fe1-8d19-439d-aa67-10d5accaf809] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004526495s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-870155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (95.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0429 20:27:48.914359   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/functional-828689/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m35.653768471s)
--- PASS: TestNetworkPlugins/group/calico/Start (95.65s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-538390 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-538390 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-538390 -n newest-cni-538390
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-538390 -n newest-cni-538390: exit status 2 (275.908561ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-538390 -n newest-cni-538390
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-538390 -n newest-cni-538390: exit status 2 (253.726536ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-538390 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-538390 -n newest-cni-538390
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-538390 -n newest-cni-538390
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (107.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m47.212569617s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (107.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-g44hf" [c79440d8-f360-4394-8514-c54efe175c9b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006171856s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-870155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-870155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4hxh9" [c1afa80b-bfd8-4c88-b350-80781f10b43e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0429 20:28:24.937075   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
E0429 20:28:24.942515   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
E0429 20:28:24.952838   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
E0429 20:28:24.973228   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
E0429 20:28:25.013620   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
E0429 20:28:25.093986   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
E0429 20:28:25.254517   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
E0429 20:28:25.574833   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
E0429 20:28:26.215788   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
E0429 20:28:27.496538   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
E0429 20:28:30.057450   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-4hxh9" [c1afa80b-bfd8-4c88-b350-80781f10b43e] Running
E0429 20:28:35.178538   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004205831s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-870155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (64.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m4.771990076s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (110.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0429 20:29:05.899798   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
E0429 20:29:12.060834   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
E0429 20:29:12.066129   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
E0429 20:29:12.076378   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
E0429 20:29:12.096663   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
E0429 20:29:12.137053   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
E0429 20:29:12.217367   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
E0429 20:29:12.377772   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
E0429 20:29:12.698862   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
E0429 20:29:13.339745   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
E0429 20:29:14.620254   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
E0429 20:29:17.180484   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
E0429 20:29:22.300701   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m50.272322919s)
--- PASS: TestNetworkPlugins/group/flannel/Start (110.27s)
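
Note: the cert_rotation errors interleaved with this run are emitted by the test binary's client-go certificate reloader for profiles that have already been deleted (no-preload-456788, old-k8s-version-919612); they appear to be harmless noise for the flannel cluster being started here. To confirm which profiles still exist in this workspace:

# deleted profiles no longer appear in the listing
out/minikube-linux-amd64 profile list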

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4x4lg" [4813d11a-2f98-4053-8649-63f685f91802] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007731944s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
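
Note: ControllerPod waits for the CNI's own control pod to become Ready. The same check can be reproduced by hand with the label and namespace shown above:

kubectl --context calico-870155 -n kube-system get pods -l k8s-app=calico-node
kubectl --context calico-870155 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m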

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-870155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)
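
Note: KubeletFlags only captures the kubelet command line over SSH; the assertion itself lives in net_test.go:133 and is not reproduced in this report. A manual spot-check of the same output might look like the following (the grep pattern is illustrative, not part of the test):

out/minikube-linux-amd64 ssh -p calico-870155 "pgrep -a kubelet" | grep -o 'container-runtime-endpoint=[^ ]*'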

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-870155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7qmjk" [037c655a-b528-4131-89e8-e762762e60c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0429 20:29:32.541387   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-7qmjk" [037c655a-b528-4131-89e8-e762762e60c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004738923s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.34s)
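
Note: NetCatPod applies a small netcat Deployment from testdata/netcat-deployment.yaml (the manifest itself is not included in this report) and waits for its pod to be Running. Equivalent manual verification, using only the names and labels visible in the log:

kubectl --context calico-870155 get deployment netcat
kubectl --context calico-870155 get pods -l app=netcat -o wide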

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-870155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)
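
Note: the DNS check resolves the short name kubernetes.default from inside the netcat pod, exercising the cluster's DNS search path. Repeating the lookup with the fully qualified service name is a useful second check:

kubectl --context calico-870155 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context calico-870155 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local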

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)
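
Note: the Localhost probe is a zero-I/O netcat port scan against the pod's own loopback interface. Flag meanings for the command above (standard nc options): -z connect without sending data, -w 5 give up after 5 seconds, -i 5 wait 5 seconds between probes.

kubectl --context calico-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"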

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)
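
Note: HairPin differs from Localhost only in the target: the pod dials its own Service name (netcat, inferred from the nc target above) instead of localhost, so traffic leaves the pod, hits the Service address and is NATed back to the same pod; this hairpin path is what the CNI must support. Manual form of the probe, plus a check that the backing Service exists:

kubectl --context calico-870155 get svc netcat
kubectl --context calico-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"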

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-870155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-870155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jg85m" [a1fbbc76-fe14-4bd2-bb94-88c604168b51] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0429 20:29:46.860010   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-jg85m" [a1fbbc76-fe14-4bd2-bb94-88c604168b51] Running
E0429 20:29:53.021987   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/old-k8s-version-919612/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005537109s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-870155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-870155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-870155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-p7ssd" [0197c254-3900-490d-8156-2355eaf66797] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0429 20:30:02.711316   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.crt: no such file or directory
E0429 20:30:02.716600   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.crt: no such file or directory
E0429 20:30:02.727055   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.crt: no such file or directory
E0429 20:30:02.747284   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.crt: no such file or directory
E0429 20:30:02.787626   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.crt: no such file or directory
E0429 20:30:02.868733   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.crt: no such file or directory
E0429 20:30:03.029590   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.crt: no such file or directory
E0429 20:30:03.350197   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.crt: no such file or directory
E0429 20:30:03.991174   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.crt: no such file or directory
E0429 20:30:05.271570   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-p7ssd" [0197c254-3900-490d-8156-2355eaf66797] Running
E0429 20:30:07.831892   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/default-k8s-diff-port-866143/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00720325s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (102.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-870155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m42.946468182s)
--- PASS: TestNetworkPlugins/group/bridge/Start (102.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-870155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7cnjl" [20690d81-9489-4cbe-a487-15d8ad95cd6d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005012706s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
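
Note: the flannel control pod appears to run as a DaemonSet (pod name kube-flannel-ds-...) in its own kube-flannel namespace rather than kube-system. Manual equivalent of the readiness wait:

kubectl --context flannel-870155 -n kube-flannel get pods -l app=flannel
kubectl --context flannel-870155 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m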

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-870155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-870155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fljwx" [f4ff7ba6-72b6-48a3-bdf5-3d7af92024ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-fljwx" [f4ff7ba6-72b6-48a3-bdf5-3d7af92024ee] Running
E0429 20:31:08.780369   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/no-preload-456788/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005543502s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-870155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-870155 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-870155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-l86dj" [e4f924f1-1007-4925-8dac-242258c0dbca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-l86dj" [e4f924f1-1007-4925-8dac-242258c0dbca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005020337s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-870155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-870155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (36/311)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.0/cached-images 0
15 TestDownloadOnly/v1.30.0/binaries 0
16 TestDownloadOnly/v1.30.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
116 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
119 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
262 TestStartStop/group/disable-driver-mounts 0.15
273 TestNetworkPlugins/group/kubenet 3.24
281 TestNetworkPlugins/group/cilium 4.26
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-193781" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-193781
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-870155 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-870155

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-870155

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-870155

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-870155

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-870155

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-870155

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-870155

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-870155

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-870155

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-870155

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-870155

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-870155" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-870155" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 29 Apr 2024 19:53:24 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0
name: cluster_info
server: https://192.168.61.135:8443
name: running-upgrade-407092
contexts:
- context:
cluster: running-upgrade-407092
extensions:
- extension:
last-update: Mon, 29 Apr 2024 19:53:24 UTC
provider: minikube.sigs.k8s.io
version: v1.33.0
name: context_info
namespace: default
user: running-upgrade-407092
name: running-upgrade-407092
current-context: running-upgrade-407092
kind: Config
preferences: {}
users:
- name: running-upgrade-407092
user:
client-certificate: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/running-upgrade-407092/client.crt
client-key: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/running-upgrade-407092/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-870155

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-870155"

                                                
                                                
----------------------- debugLogs end: kubenet-870155 [took: 3.061867064s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-870155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-870155
--- SKIP: TestNetworkPlugins/group/kubenet (3.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0429 19:54:00.893834   15124 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/addons-412183/client.crt: no such file or directory
panic.go:626: 
----------------------- debugLogs start: cilium-870155 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-870155" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 19:53:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.50.54:8443
  name: pause-467472
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18774-7754/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 19:53:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.61.135:8443
  name: running-upgrade-407092
contexts:
- context:
    cluster: pause-467472
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 19:53:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: pause-467472
  name: pause-467472
- context:
    cluster: running-upgrade-407092
    extensions:
    - extension:
        last-update: Mon, 29 Apr 2024 19:53:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: running-upgrade-407092
  name: running-upgrade-407092
current-context: pause-467472
kind: Config
preferences: {}
users:
- name: pause-467472
  user:
    client-certificate: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/pause-467472/client.crt
    client-key: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/pause-467472/client.key
- name: running-upgrade-407092
  user:
    client-certificate: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/running-upgrade-407092/client.crt
    client-key: /home/jenkins/minikube-integration/18774-7754/.minikube/profiles/running-upgrade-407092/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-870155

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-870155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870155"

                                                
                                                
----------------------- debugLogs end: cilium-870155 [took: 4.102013729s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-870155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-870155
--- SKIP: TestNetworkPlugins/group/cilium (4.26s)

                                                
                                    